00:00:00.001 Started by upstream project "autotest-per-patch" build number 126173 00:00:00.001 originally caused by: 00:00:00.001 Started by upstream project "jbp-per-patch" build number 23928 00:00:00.001 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.033 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:01.314 The recommended git tool is: git 00:00:01.314 using credential 00000000-0000-0000-0000-000000000002 00:00:01.316 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:01.329 Fetching changes from the remote Git repository 00:00:01.331 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:01.346 Using shallow fetch with depth 1 00:00:01.346 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:01.346 > git --version # timeout=10 00:00:01.357 > git --version # 'git version 2.39.2' 00:00:01.357 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:01.369 Setting http proxy: proxy-dmz.intel.com:911 00:00:01.369 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/changes/71/24171/1 # timeout=5 00:00:07.316 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:07.329 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:07.342 Checking out Revision f574307dba849e7d22dd5631ce9e594362bd2ebc (FETCH_HEAD) 00:00:07.342 > git config core.sparsecheckout # timeout=10 00:00:07.355 > git read-tree -mu HEAD # timeout=10 00:00:07.374 > git checkout -f f574307dba849e7d22dd5631ce9e594362bd2ebc # timeout=5 00:00:07.402 Commit message: "packer: Drop centos7" 00:00:07.402 > git rev-list --no-walk 1cada6d681c9931648d947263dba569d3956eaf1 # timeout=10 00:00:07.536 [Pipeline] Start of Pipeline 00:00:07.555 [Pipeline] library 00:00:07.557 Loading library shm_lib@master 00:00:07.557 Library shm_lib@master is cached. Copying from home. 00:00:07.576 [Pipeline] node 00:00:22.579 Still waiting to schedule task 00:00:22.579 Waiting for next available executor on ‘vagrant-vm-host’ 00:11:09.152 Running on VM-host-WFP7 in /var/jenkins/workspace/nvmf-tcp-vg-autotest 00:11:09.153 [Pipeline] { 00:11:09.163 [Pipeline] catchError 00:11:09.165 [Pipeline] { 00:11:09.179 [Pipeline] wrap 00:11:09.188 [Pipeline] { 00:11:09.197 [Pipeline] stage 00:11:09.198 [Pipeline] { (Prologue) 00:11:09.217 [Pipeline] echo 00:11:09.218 Node: VM-host-WFP7 00:11:09.223 [Pipeline] cleanWs 00:11:09.232 [WS-CLEANUP] Deleting project workspace... 00:11:09.232 [WS-CLEANUP] Deferred wipeout is used... 
00:11:09.238 [WS-CLEANUP] done 00:11:09.411 [Pipeline] setCustomBuildProperty 00:11:09.495 [Pipeline] httpRequest 00:11:09.520 [Pipeline] echo 00:11:09.521 Sorcerer 10.211.164.101 is alive 00:11:09.530 [Pipeline] httpRequest 00:11:09.534 HttpMethod: GET 00:11:09.535 URL: http://10.211.164.101/packages/jbp_f574307dba849e7d22dd5631ce9e594362bd2ebc.tar.gz 00:11:09.536 Sending request to url: http://10.211.164.101/packages/jbp_f574307dba849e7d22dd5631ce9e594362bd2ebc.tar.gz 00:11:09.537 Response Code: HTTP/1.1 200 OK 00:11:09.537 Success: Status code 200 is in the accepted range: 200,404 00:11:09.537 Saving response body to /var/jenkins/workspace/nvmf-tcp-vg-autotest/jbp_f574307dba849e7d22dd5631ce9e594362bd2ebc.tar.gz 00:11:09.683 [Pipeline] sh 00:11:09.966 + tar --no-same-owner -xf jbp_f574307dba849e7d22dd5631ce9e594362bd2ebc.tar.gz 00:11:09.982 [Pipeline] httpRequest 00:11:09.996 [Pipeline] echo 00:11:09.997 Sorcerer 10.211.164.101 is alive 00:11:10.005 [Pipeline] httpRequest 00:11:10.009 HttpMethod: GET 00:11:10.009 URL: http://10.211.164.101/packages/spdk_2728651eeb6994be786e188da61cae84c5bb49ac.tar.gz 00:11:10.010 Sending request to url: http://10.211.164.101/packages/spdk_2728651eeb6994be786e188da61cae84c5bb49ac.tar.gz 00:11:10.011 Response Code: HTTP/1.1 200 OK 00:11:10.011 Success: Status code 200 is in the accepted range: 200,404 00:11:10.011 Saving response body to /var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk_2728651eeb6994be786e188da61cae84c5bb49ac.tar.gz 00:11:12.306 [Pipeline] sh 00:11:12.581 + tar --no-same-owner -xf spdk_2728651eeb6994be786e188da61cae84c5bb49ac.tar.gz 00:11:15.183 [Pipeline] sh 00:11:15.460 + git -C spdk log --oneline -n5 00:11:15.460 2728651ee accel: adjust task per ch define name 00:11:15.460 e7cce062d Examples/Perf: correct the calculation of total bandwidth 00:11:15.460 3b4b1d00c libvfio-user: bump MAX_DMA_REGIONS 00:11:15.460 32a79de81 lib/event: add disable_cpumask_locks to spdk_app_opts 00:11:15.460 719d03c6a sock/uring: only register net impl if supported 00:11:15.480 [Pipeline] writeFile 00:11:15.495 [Pipeline] sh 00:11:15.776 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:11:15.789 [Pipeline] sh 00:11:16.069 + cat autorun-spdk.conf 00:11:16.069 SPDK_RUN_FUNCTIONAL_TEST=1 00:11:16.069 SPDK_TEST_NVMF=1 00:11:16.069 SPDK_TEST_NVMF_TRANSPORT=tcp 00:11:16.069 SPDK_TEST_USDT=1 00:11:16.069 SPDK_TEST_NVMF_MDNS=1 00:11:16.069 SPDK_RUN_UBSAN=1 00:11:16.069 NET_TYPE=virt 00:11:16.069 SPDK_JSONRPC_GO_CLIENT=1 00:11:16.069 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:11:16.075 RUN_NIGHTLY=0 00:11:16.078 [Pipeline] } 00:11:16.097 [Pipeline] // stage 00:11:16.114 [Pipeline] stage 00:11:16.116 [Pipeline] { (Run VM) 00:11:16.127 [Pipeline] sh 00:11:16.457 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:11:16.457 + echo 'Start stage prepare_nvme.sh' 00:11:16.457 Start stage prepare_nvme.sh 00:11:16.457 + [[ -n 2 ]] 00:11:16.457 + disk_prefix=ex2 00:11:16.457 + [[ -n /var/jenkins/workspace/nvmf-tcp-vg-autotest ]] 00:11:16.457 + [[ -e /var/jenkins/workspace/nvmf-tcp-vg-autotest/autorun-spdk.conf ]] 00:11:16.457 + source /var/jenkins/workspace/nvmf-tcp-vg-autotest/autorun-spdk.conf 00:11:16.457 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:11:16.458 ++ SPDK_TEST_NVMF=1 00:11:16.458 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:11:16.458 ++ SPDK_TEST_USDT=1 00:11:16.458 ++ SPDK_TEST_NVMF_MDNS=1 00:11:16.458 ++ SPDK_RUN_UBSAN=1 00:11:16.458 ++ NET_TYPE=virt 00:11:16.458 ++ SPDK_JSONRPC_GO_CLIENT=1 00:11:16.458 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 
00:11:16.458 ++ RUN_NIGHTLY=0 00:11:16.458 + cd /var/jenkins/workspace/nvmf-tcp-vg-autotest 00:11:16.458 + nvme_files=() 00:11:16.458 + declare -A nvme_files 00:11:16.458 + backend_dir=/var/lib/libvirt/images/backends 00:11:16.458 + nvme_files['nvme.img']=5G 00:11:16.458 + nvme_files['nvme-cmb.img']=5G 00:11:16.458 + nvme_files['nvme-multi0.img']=4G 00:11:16.458 + nvme_files['nvme-multi1.img']=4G 00:11:16.458 + nvme_files['nvme-multi2.img']=4G 00:11:16.458 + nvme_files['nvme-openstack.img']=8G 00:11:16.458 + nvme_files['nvme-zns.img']=5G 00:11:16.458 + (( SPDK_TEST_NVME_PMR == 1 )) 00:11:16.458 + (( SPDK_TEST_FTL == 1 )) 00:11:16.458 + (( SPDK_TEST_NVME_FDP == 1 )) 00:11:16.458 + [[ ! -d /var/lib/libvirt/images/backends ]] 00:11:16.458 + for nvme in "${!nvme_files[@]}" 00:11:16.458 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-multi2.img -s 4G 00:11:16.458 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:11:16.458 + for nvme in "${!nvme_files[@]}" 00:11:16.458 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-cmb.img -s 5G 00:11:16.458 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:11:16.458 + for nvme in "${!nvme_files[@]}" 00:11:16.458 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-openstack.img -s 8G 00:11:16.458 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:11:16.458 + for nvme in "${!nvme_files[@]}" 00:11:16.458 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-zns.img -s 5G 00:11:16.458 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:11:16.458 + for nvme in "${!nvme_files[@]}" 00:11:16.458 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-multi1.img -s 4G 00:11:16.458 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:11:16.458 + for nvme in "${!nvme_files[@]}" 00:11:16.458 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-multi0.img -s 4G 00:11:16.458 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:11:16.458 + for nvme in "${!nvme_files[@]}" 00:11:16.458 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme.img -s 5G 00:11:16.716 Formatting '/var/lib/libvirt/images/backends/ex2-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:11:16.716 ++ sudo grep -rl ex2-nvme.img /etc/libvirt/qemu 00:11:16.716 + echo 'End stage prepare_nvme.sh' 00:11:16.716 End stage prepare_nvme.sh 00:11:16.726 [Pipeline] sh 00:11:17.004 + DISTRO=fedora38 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:11:17.005 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 -b /var/lib/libvirt/images/backends/ex2-nvme.img -b /var/lib/libvirt/images/backends/ex2-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex2-nvme-multi1.img:/var/lib/libvirt/images/backends/ex2-nvme-multi2.img -H -a -v -f fedora38 00:11:17.005 00:11:17.005 DIR=/var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk/scripts/vagrant 
00:11:17.005 SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk 00:11:17.005 VAGRANT_TARGET=/var/jenkins/workspace/nvmf-tcp-vg-autotest 00:11:17.005 HELP=0 00:11:17.005 DRY_RUN=0 00:11:17.005 NVME_FILE=/var/lib/libvirt/images/backends/ex2-nvme.img,/var/lib/libvirt/images/backends/ex2-nvme-multi0.img, 00:11:17.005 NVME_DISKS_TYPE=nvme,nvme, 00:11:17.005 NVME_AUTO_CREATE=0 00:11:17.005 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex2-nvme-multi1.img:/var/lib/libvirt/images/backends/ex2-nvme-multi2.img, 00:11:17.005 NVME_CMB=,, 00:11:17.005 NVME_PMR=,, 00:11:17.005 NVME_ZNS=,, 00:11:17.005 NVME_MS=,, 00:11:17.005 NVME_FDP=,, 00:11:17.005 SPDK_VAGRANT_DISTRO=fedora38 00:11:17.005 SPDK_VAGRANT_VMCPU=10 00:11:17.005 SPDK_VAGRANT_VMRAM=12288 00:11:17.005 SPDK_VAGRANT_PROVIDER=libvirt 00:11:17.005 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:11:17.005 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:11:17.005 SPDK_OPENSTACK_NETWORK=0 00:11:17.005 VAGRANT_PACKAGE_BOX=0 00:11:17.005 VAGRANTFILE=/var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:11:17.005 FORCE_DISTRO=true 00:11:17.005 VAGRANT_BOX_VERSION= 00:11:17.005 EXTRA_VAGRANTFILES= 00:11:17.005 NIC_MODEL=virtio 00:11:17.005 00:11:17.005 mkdir: created directory '/var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora38-libvirt' 00:11:17.005 /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora38-libvirt /var/jenkins/workspace/nvmf-tcp-vg-autotest 00:11:19.535 Bringing machine 'default' up with 'libvirt' provider... 00:11:19.795 ==> default: Creating image (snapshot of base box volume). 00:11:20.053 ==> default: Creating domain with the following settings... 00:11:20.053 ==> default: -- Name: fedora38-38-1.6-1716830599-074-updated-1705279005_default_1721037153_47d803b14575c578f285 00:11:20.053 ==> default: -- Domain type: kvm 00:11:20.053 ==> default: -- Cpus: 10 00:11:20.053 ==> default: -- Feature: acpi 00:11:20.053 ==> default: -- Feature: apic 00:11:20.053 ==> default: -- Feature: pae 00:11:20.053 ==> default: -- Memory: 12288M 00:11:20.053 ==> default: -- Memory Backing: hugepages: 00:11:20.053 ==> default: -- Management MAC: 00:11:20.053 ==> default: -- Loader: 00:11:20.053 ==> default: -- Nvram: 00:11:20.053 ==> default: -- Base box: spdk/fedora38 00:11:20.053 ==> default: -- Storage pool: default 00:11:20.053 ==> default: -- Image: /var/lib/libvirt/images/fedora38-38-1.6-1716830599-074-updated-1705279005_default_1721037153_47d803b14575c578f285.img (20G) 00:11:20.053 ==> default: -- Volume Cache: default 00:11:20.053 ==> default: -- Kernel: 00:11:20.053 ==> default: -- Initrd: 00:11:20.053 ==> default: -- Graphics Type: vnc 00:11:20.053 ==> default: -- Graphics Port: -1 00:11:20.053 ==> default: -- Graphics IP: 127.0.0.1 00:11:20.053 ==> default: -- Graphics Password: Not defined 00:11:20.053 ==> default: -- Video Type: cirrus 00:11:20.053 ==> default: -- Video VRAM: 9216 00:11:20.053 ==> default: -- Sound Type: 00:11:20.053 ==> default: -- Keymap: en-us 00:11:20.053 ==> default: -- TPM Path: 00:11:20.054 ==> default: -- INPUT: type=mouse, bus=ps2 00:11:20.054 ==> default: -- Command line args: 00:11:20.054 ==> default: -> value=-device, 00:11:20.054 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:11:20.054 ==> default: -> value=-drive, 00:11:20.054 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme.img,if=none,id=nvme-0-drive0, 00:11:20.054 ==> default: -> value=-device, 00:11:20.054 ==> default: -> 
value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:11:20.054 ==> default: -> value=-device, 00:11:20.054 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:11:20.054 ==> default: -> value=-drive, 00:11:20.054 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-multi0.img,if=none,id=nvme-1-drive0, 00:11:20.054 ==> default: -> value=-device, 00:11:20.054 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:11:20.054 ==> default: -> value=-drive, 00:11:20.054 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-multi1.img,if=none,id=nvme-1-drive1, 00:11:20.054 ==> default: -> value=-device, 00:11:20.054 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:11:20.054 ==> default: -> value=-drive, 00:11:20.054 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-multi2.img,if=none,id=nvme-1-drive2, 00:11:20.054 ==> default: -> value=-device, 00:11:20.054 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:11:20.313 ==> default: Creating shared folders metadata... 00:11:20.313 ==> default: Starting domain. 00:11:21.692 ==> default: Waiting for domain to get an IP address... 00:11:39.848 ==> default: Waiting for SSH to become available... 00:11:39.848 ==> default: Configuring and enabling network interfaces... 00:11:44.120 default: SSH address: 192.168.121.211:22 00:11:44.120 default: SSH username: vagrant 00:11:44.120 default: SSH auth method: private key 00:11:46.651 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:11:54.826 ==> default: Mounting SSHFS shared folder... 00:11:56.729 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest/fedora38-libvirt/output => /home/vagrant/spdk_repo/output 00:11:56.729 ==> default: Checking Mount.. 00:11:58.103 ==> default: Folder Successfully Mounted! 00:11:58.103 ==> default: Running provisioner: file... 00:11:59.073 default: ~/.gitconfig => .gitconfig 00:11:59.640 00:11:59.640 SUCCESS! 00:11:59.640 00:11:59.640 cd to /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora38-libvirt and type "vagrant ssh" to use. 00:11:59.640 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:11:59.640 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora38-libvirt" to destroy all trace of vm. 
00:11:59.640 00:11:59.651 [Pipeline] } 00:11:59.672 [Pipeline] // stage 00:11:59.684 [Pipeline] dir 00:11:59.685 Running in /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora38-libvirt 00:11:59.687 [Pipeline] { 00:11:59.704 [Pipeline] catchError 00:11:59.707 [Pipeline] { 00:11:59.723 [Pipeline] sh 00:12:00.004 + vagrant ssh-config --host vagrant 00:12:00.004 + sed -ne /^Host/,$p 00:12:00.004 + tee ssh_conf 00:12:02.537 Host vagrant 00:12:02.537 HostName 192.168.121.211 00:12:02.537 User vagrant 00:12:02.537 Port 22 00:12:02.537 UserKnownHostsFile /dev/null 00:12:02.537 StrictHostKeyChecking no 00:12:02.537 PasswordAuthentication no 00:12:02.537 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora38/38-1.6-1716830599-074-updated-1705279005/libvirt/fedora38 00:12:02.537 IdentitiesOnly yes 00:12:02.537 LogLevel FATAL 00:12:02.537 ForwardAgent yes 00:12:02.537 ForwardX11 yes 00:12:02.537 00:12:02.550 [Pipeline] withEnv 00:12:02.552 [Pipeline] { 00:12:02.565 [Pipeline] sh 00:12:02.840 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:12:02.840 source /etc/os-release 00:12:02.840 [[ -e /image.version ]] && img=$(< /image.version) 00:12:02.840 # Minimal, systemd-like check. 00:12:02.840 if [[ -e /.dockerenv ]]; then 00:12:02.840 # Clear garbage from the node's name: 00:12:02.840 # agt-er_autotest_547-896 -> autotest_547-896 00:12:02.840 # $HOSTNAME is the actual container id 00:12:02.840 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:12:02.840 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:12:02.840 # We can assume this is a mount from a host where container is running, 00:12:02.840 # so fetch its hostname to easily identify the target swarm worker. 00:12:02.840 container="$(< /etc/hostname) ($agent)" 00:12:02.840 else 00:12:02.840 # Fallback 00:12:02.840 container=$agent 00:12:02.840 fi 00:12:02.840 fi 00:12:02.840 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:12:02.840 00:12:03.109 [Pipeline] } 00:12:03.130 [Pipeline] // withEnv 00:12:03.138 [Pipeline] setCustomBuildProperty 00:12:03.152 [Pipeline] stage 00:12:03.154 [Pipeline] { (Tests) 00:12:03.171 [Pipeline] sh 00:12:03.445 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:12:03.723 [Pipeline] sh 00:12:03.999 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:12:04.270 [Pipeline] timeout 00:12:04.270 Timeout set to expire in 40 min 00:12:04.272 [Pipeline] { 00:12:04.290 [Pipeline] sh 00:12:04.569 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:12:05.138 HEAD is now at 2728651ee accel: adjust task per ch define name 00:12:05.154 [Pipeline] sh 00:12:05.445 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:12:05.715 [Pipeline] sh 00:12:05.993 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:12:06.285 [Pipeline] sh 00:12:06.568 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvmf-tcp-vg-autotest ./autoruner.sh spdk_repo 00:12:06.829 ++ readlink -f spdk_repo 00:12:06.829 + DIR_ROOT=/home/vagrant/spdk_repo 00:12:06.829 + [[ -n /home/vagrant/spdk_repo ]] 00:12:06.829 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:12:06.829 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:12:06.829 + [[ -d /home/vagrant/spdk_repo/spdk ]] 
00:12:06.829 + [[ ! -d /home/vagrant/spdk_repo/output ]] 00:12:06.829 + [[ -d /home/vagrant/spdk_repo/output ]] 00:12:06.829 + [[ nvmf-tcp-vg-autotest == pkgdep-* ]] 00:12:06.829 + cd /home/vagrant/spdk_repo 00:12:06.829 + source /etc/os-release 00:12:06.829 ++ NAME='Fedora Linux' 00:12:06.829 ++ VERSION='38 (Cloud Edition)' 00:12:06.829 ++ ID=fedora 00:12:06.829 ++ VERSION_ID=38 00:12:06.829 ++ VERSION_CODENAME= 00:12:06.829 ++ PLATFORM_ID=platform:f38 00:12:06.829 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:12:06.829 ++ ANSI_COLOR='0;38;2;60;110;180' 00:12:06.829 ++ LOGO=fedora-logo-icon 00:12:06.829 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:12:06.829 ++ HOME_URL=https://fedoraproject.org/ 00:12:06.829 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:12:06.829 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:12:06.829 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:12:06.829 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:12:06.829 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:12:06.829 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:12:06.829 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:12:06.829 ++ SUPPORT_END=2024-05-14 00:12:06.829 ++ VARIANT='Cloud Edition' 00:12:06.829 ++ VARIANT_ID=cloud 00:12:06.829 + uname -a 00:12:06.829 Linux fedora38-cloud-1716830599-074-updated-1705279005 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:12:06.829 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:12:07.397 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:07.397 Hugepages 00:12:07.397 node hugesize free / total 00:12:07.397 node0 1048576kB 0 / 0 00:12:07.397 node0 2048kB 0 / 0 00:12:07.397 00:12:07.397 Type BDF Vendor Device NUMA Driver Device Block devices 00:12:07.397 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:12:07.397 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:12:07.397 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:12:07.397 + rm -f /tmp/spdk-ld-path 00:12:07.397 + source autorun-spdk.conf 00:12:07.397 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:12:07.397 ++ SPDK_TEST_NVMF=1 00:12:07.397 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:12:07.397 ++ SPDK_TEST_USDT=1 00:12:07.397 ++ SPDK_TEST_NVMF_MDNS=1 00:12:07.397 ++ SPDK_RUN_UBSAN=1 00:12:07.397 ++ NET_TYPE=virt 00:12:07.397 ++ SPDK_JSONRPC_GO_CLIENT=1 00:12:07.397 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:12:07.397 ++ RUN_NIGHTLY=0 00:12:07.397 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:12:07.397 + [[ -n '' ]] 00:12:07.397 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:12:07.397 + for M in /var/spdk/build-*-manifest.txt 00:12:07.397 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:12:07.397 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:12:07.397 + for M in /var/spdk/build-*-manifest.txt 00:12:07.397 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:12:07.397 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:12:07.397 ++ uname 00:12:07.397 + [[ Linux == \L\i\n\u\x ]] 00:12:07.397 + sudo dmesg -T 00:12:07.397 + sudo dmesg --clear 00:12:07.397 + dmesg_pid=5322 00:12:07.397 + [[ Fedora Linux == FreeBSD ]] 00:12:07.397 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:12:07.397 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:12:07.397 + sudo dmesg -Tw 00:12:07.397 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:12:07.397 + [[ -x 
/usr/src/fio-static/fio ]] 00:12:07.397 + export FIO_BIN=/usr/src/fio-static/fio 00:12:07.397 + FIO_BIN=/usr/src/fio-static/fio 00:12:07.397 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:12:07.397 + [[ ! -v VFIO_QEMU_BIN ]] 00:12:07.397 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:12:07.397 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:12:07.397 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:12:07.397 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:12:07.397 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:12:07.397 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:12:07.397 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:12:07.397 Test configuration: 00:12:07.397 SPDK_RUN_FUNCTIONAL_TEST=1 00:12:07.397 SPDK_TEST_NVMF=1 00:12:07.397 SPDK_TEST_NVMF_TRANSPORT=tcp 00:12:07.397 SPDK_TEST_USDT=1 00:12:07.397 SPDK_TEST_NVMF_MDNS=1 00:12:07.397 SPDK_RUN_UBSAN=1 00:12:07.397 NET_TYPE=virt 00:12:07.397 SPDK_JSONRPC_GO_CLIENT=1 00:12:07.397 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:12:07.656 RUN_NIGHTLY=0 09:53:21 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:07.656 09:53:21 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:12:07.656 09:53:21 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:07.656 09:53:21 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:07.656 09:53:21 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:07.656 09:53:21 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:07.656 09:53:21 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:07.656 09:53:21 -- paths/export.sh@5 -- $ export PATH 00:12:07.657 09:53:21 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:07.657 09:53:21 -- common/autobuild_common.sh@443 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:12:07.657 09:53:21 -- common/autobuild_common.sh@444 -- $ date +%s 00:12:07.657 09:53:21 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1721037201.XXXXXX 00:12:07.657 09:53:21 -- 
common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1721037201.A9FBRe 00:12:07.657 09:53:21 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]] 00:12:07.657 09:53:21 -- common/autobuild_common.sh@450 -- $ '[' -n '' ']' 00:12:07.657 09:53:21 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:12:07.657 09:53:21 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:12:07.657 09:53:21 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:12:07.657 09:53:21 -- common/autobuild_common.sh@460 -- $ get_config_params 00:12:07.657 09:53:21 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:12:07.657 09:53:21 -- common/autotest_common.sh@10 -- $ set +x 00:12:07.657 09:53:21 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-avahi --with-golang' 00:12:07.657 09:53:21 -- common/autobuild_common.sh@462 -- $ start_monitor_resources 00:12:07.657 09:53:21 -- pm/common@17 -- $ local monitor 00:12:07.657 09:53:21 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:12:07.657 09:53:21 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:12:07.657 09:53:21 -- pm/common@25 -- $ sleep 1 00:12:07.657 09:53:21 -- pm/common@21 -- $ date +%s 00:12:07.657 09:53:21 -- pm/common@21 -- $ date +%s 00:12:07.657 09:53:21 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1721037201 00:12:07.657 09:53:21 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1721037201 00:12:07.657 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1721037201_collect-vmstat.pm.log 00:12:07.657 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1721037201_collect-cpu-load.pm.log 00:12:08.595 09:53:22 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT 00:12:08.595 09:53:22 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:12:08.595 09:53:22 -- spdk/autobuild.sh@12 -- $ umask 022 00:12:08.595 09:53:22 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:12:08.595 09:53:22 -- spdk/autobuild.sh@16 -- $ date -u 00:12:08.595 Mon Jul 15 09:53:22 AM UTC 2024 00:12:08.595 09:53:22 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:12:08.595 v24.09-pre-206-g2728651ee 00:12:08.595 09:53:22 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:12:08.595 09:53:22 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:12:08.595 09:53:22 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:12:08.595 09:53:22 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:12:08.595 09:53:22 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:12:08.595 09:53:22 -- common/autotest_common.sh@10 -- $ set +x 00:12:08.595 ************************************ 00:12:08.595 START TEST ubsan 00:12:08.595 ************************************ 00:12:08.595 using ubsan 00:12:08.595 09:53:22 ubsan -- common/autotest_common.sh@1123 -- $ echo 
'using ubsan' 00:12:08.595 00:12:08.595 real 0m0.000s 00:12:08.595 user 0m0.000s 00:12:08.595 sys 0m0.000s 00:12:08.595 09:53:22 ubsan -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:12:08.595 09:53:22 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:12:08.595 ************************************ 00:12:08.595 END TEST ubsan 00:12:08.595 ************************************ 00:12:08.854 09:53:22 -- common/autotest_common.sh@1142 -- $ return 0 00:12:08.854 09:53:22 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:12:08.854 09:53:22 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:12:08.854 09:53:22 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:12:08.854 09:53:22 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:12:08.854 09:53:22 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:12:08.854 09:53:22 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:12:08.854 09:53:22 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:12:08.854 09:53:22 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:12:08.854 09:53:22 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-avahi --with-golang --with-shared 00:12:08.854 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:12:08.854 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:12:09.424 Using 'verbs' RDMA provider 00:12:25.245 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:12:40.121 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:12:40.121 go version go1.21.1 linux/amd64 00:12:40.121 Creating mk/config.mk...done. 00:12:40.121 Creating mk/cc.flags.mk...done. 00:12:40.121 Type 'make' to build. 00:12:40.121 09:53:53 -- spdk/autobuild.sh@69 -- $ run_test make make -j10 00:12:40.121 09:53:53 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:12:40.121 09:53:53 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:12:40.121 09:53:53 -- common/autotest_common.sh@10 -- $ set +x 00:12:40.121 ************************************ 00:12:40.121 START TEST make 00:12:40.121 ************************************ 00:12:40.121 09:53:53 make -- common/autotest_common.sh@1123 -- $ make -j10 00:12:40.380 make[1]: Nothing to be done for 'all'. 
00:12:52.598 The Meson build system 00:12:52.598 Version: 1.3.1 00:12:52.598 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:12:52.598 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:12:52.598 Build type: native build 00:12:52.598 Program cat found: YES (/usr/bin/cat) 00:12:52.598 Project name: DPDK 00:12:52.598 Project version: 24.03.0 00:12:52.598 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:12:52.598 C linker for the host machine: cc ld.bfd 2.39-16 00:12:52.598 Host machine cpu family: x86_64 00:12:52.598 Host machine cpu: x86_64 00:12:52.598 Message: ## Building in Developer Mode ## 00:12:52.598 Program pkg-config found: YES (/usr/bin/pkg-config) 00:12:52.598 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:12:52.598 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:12:52.598 Program python3 found: YES (/usr/bin/python3) 00:12:52.598 Program cat found: YES (/usr/bin/cat) 00:12:52.598 Compiler for C supports arguments -march=native: YES 00:12:52.598 Checking for size of "void *" : 8 00:12:52.598 Checking for size of "void *" : 8 (cached) 00:12:52.598 Compiler for C supports link arguments -Wl,--undefined-version: NO 00:12:52.598 Library m found: YES 00:12:52.598 Library numa found: YES 00:12:52.598 Has header "numaif.h" : YES 00:12:52.598 Library fdt found: NO 00:12:52.598 Library execinfo found: NO 00:12:52.598 Has header "execinfo.h" : YES 00:12:52.598 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:12:52.598 Run-time dependency libarchive found: NO (tried pkgconfig) 00:12:52.598 Run-time dependency libbsd found: NO (tried pkgconfig) 00:12:52.598 Run-time dependency jansson found: NO (tried pkgconfig) 00:12:52.598 Run-time dependency openssl found: YES 3.0.9 00:12:52.598 Run-time dependency libpcap found: YES 1.10.4 00:12:52.598 Has header "pcap.h" with dependency libpcap: YES 00:12:52.598 Compiler for C supports arguments -Wcast-qual: YES 00:12:52.598 Compiler for C supports arguments -Wdeprecated: YES 00:12:52.598 Compiler for C supports arguments -Wformat: YES 00:12:52.598 Compiler for C supports arguments -Wformat-nonliteral: NO 00:12:52.598 Compiler for C supports arguments -Wformat-security: NO 00:12:52.598 Compiler for C supports arguments -Wmissing-declarations: YES 00:12:52.598 Compiler for C supports arguments -Wmissing-prototypes: YES 00:12:52.598 Compiler for C supports arguments -Wnested-externs: YES 00:12:52.598 Compiler for C supports arguments -Wold-style-definition: YES 00:12:52.598 Compiler for C supports arguments -Wpointer-arith: YES 00:12:52.598 Compiler for C supports arguments -Wsign-compare: YES 00:12:52.598 Compiler for C supports arguments -Wstrict-prototypes: YES 00:12:52.598 Compiler for C supports arguments -Wundef: YES 00:12:52.598 Compiler for C supports arguments -Wwrite-strings: YES 00:12:52.598 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:12:52.598 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:12:52.598 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:12:52.598 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:12:52.598 Program objdump found: YES (/usr/bin/objdump) 00:12:52.598 Compiler for C supports arguments -mavx512f: YES 00:12:52.598 Checking if "AVX512 checking" compiles: YES 00:12:52.598 Fetching value of define "__SSE4_2__" : 1 00:12:52.598 Fetching value of define 
"__AES__" : 1 00:12:52.598 Fetching value of define "__AVX__" : 1 00:12:52.598 Fetching value of define "__AVX2__" : 1 00:12:52.598 Fetching value of define "__AVX512BW__" : 1 00:12:52.598 Fetching value of define "__AVX512CD__" : 1 00:12:52.598 Fetching value of define "__AVX512DQ__" : 1 00:12:52.598 Fetching value of define "__AVX512F__" : 1 00:12:52.599 Fetching value of define "__AVX512VL__" : 1 00:12:52.599 Fetching value of define "__PCLMUL__" : 1 00:12:52.599 Fetching value of define "__RDRND__" : 1 00:12:52.599 Fetching value of define "__RDSEED__" : 1 00:12:52.599 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:12:52.599 Fetching value of define "__znver1__" : (undefined) 00:12:52.599 Fetching value of define "__znver2__" : (undefined) 00:12:52.599 Fetching value of define "__znver3__" : (undefined) 00:12:52.599 Fetching value of define "__znver4__" : (undefined) 00:12:52.599 Compiler for C supports arguments -Wno-format-truncation: YES 00:12:52.599 Message: lib/log: Defining dependency "log" 00:12:52.599 Message: lib/kvargs: Defining dependency "kvargs" 00:12:52.599 Message: lib/telemetry: Defining dependency "telemetry" 00:12:52.599 Checking for function "getentropy" : NO 00:12:52.599 Message: lib/eal: Defining dependency "eal" 00:12:52.599 Message: lib/ring: Defining dependency "ring" 00:12:52.599 Message: lib/rcu: Defining dependency "rcu" 00:12:52.599 Message: lib/mempool: Defining dependency "mempool" 00:12:52.599 Message: lib/mbuf: Defining dependency "mbuf" 00:12:52.599 Fetching value of define "__PCLMUL__" : 1 (cached) 00:12:52.599 Fetching value of define "__AVX512F__" : 1 (cached) 00:12:52.599 Fetching value of define "__AVX512BW__" : 1 (cached) 00:12:52.599 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:12:52.599 Fetching value of define "__AVX512VL__" : 1 (cached) 00:12:52.599 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:12:52.599 Compiler for C supports arguments -mpclmul: YES 00:12:52.599 Compiler for C supports arguments -maes: YES 00:12:52.599 Compiler for C supports arguments -mavx512f: YES (cached) 00:12:52.599 Compiler for C supports arguments -mavx512bw: YES 00:12:52.599 Compiler for C supports arguments -mavx512dq: YES 00:12:52.599 Compiler for C supports arguments -mavx512vl: YES 00:12:52.599 Compiler for C supports arguments -mvpclmulqdq: YES 00:12:52.599 Compiler for C supports arguments -mavx2: YES 00:12:52.599 Compiler for C supports arguments -mavx: YES 00:12:52.599 Message: lib/net: Defining dependency "net" 00:12:52.599 Message: lib/meter: Defining dependency "meter" 00:12:52.599 Message: lib/ethdev: Defining dependency "ethdev" 00:12:52.599 Message: lib/pci: Defining dependency "pci" 00:12:52.599 Message: lib/cmdline: Defining dependency "cmdline" 00:12:52.599 Message: lib/hash: Defining dependency "hash" 00:12:52.599 Message: lib/timer: Defining dependency "timer" 00:12:52.599 Message: lib/compressdev: Defining dependency "compressdev" 00:12:52.599 Message: lib/cryptodev: Defining dependency "cryptodev" 00:12:52.599 Message: lib/dmadev: Defining dependency "dmadev" 00:12:52.599 Compiler for C supports arguments -Wno-cast-qual: YES 00:12:52.599 Message: lib/power: Defining dependency "power" 00:12:52.599 Message: lib/reorder: Defining dependency "reorder" 00:12:52.599 Message: lib/security: Defining dependency "security" 00:12:52.599 Has header "linux/userfaultfd.h" : YES 00:12:52.599 Has header "linux/vduse.h" : YES 00:12:52.599 Message: lib/vhost: Defining dependency "vhost" 00:12:52.599 Compiler for C 
supports arguments -Wno-format-truncation: YES (cached) 00:12:52.599 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:12:52.599 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:12:52.599 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:12:52.599 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:12:52.599 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:12:52.599 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:12:52.599 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:12:52.599 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:12:52.599 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:12:52.599 Program doxygen found: YES (/usr/bin/doxygen) 00:12:52.599 Configuring doxy-api-html.conf using configuration 00:12:52.599 Configuring doxy-api-man.conf using configuration 00:12:52.599 Program mandb found: YES (/usr/bin/mandb) 00:12:52.599 Program sphinx-build found: NO 00:12:52.599 Configuring rte_build_config.h using configuration 00:12:52.599 Message: 00:12:52.599 ================= 00:12:52.599 Applications Enabled 00:12:52.599 ================= 00:12:52.599 00:12:52.599 apps: 00:12:52.599 00:12:52.599 00:12:52.599 Message: 00:12:52.599 ================= 00:12:52.599 Libraries Enabled 00:12:52.599 ================= 00:12:52.599 00:12:52.599 libs: 00:12:52.599 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:12:52.599 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:12:52.599 cryptodev, dmadev, power, reorder, security, vhost, 00:12:52.599 00:12:52.599 Message: 00:12:52.599 =============== 00:12:52.599 Drivers Enabled 00:12:52.599 =============== 00:12:52.599 00:12:52.599 common: 00:12:52.599 00:12:52.599 bus: 00:12:52.599 pci, vdev, 00:12:52.599 mempool: 00:12:52.599 ring, 00:12:52.599 dma: 00:12:52.599 00:12:52.599 net: 00:12:52.599 00:12:52.599 crypto: 00:12:52.599 00:12:52.599 compress: 00:12:52.599 00:12:52.599 vdpa: 00:12:52.599 00:12:52.599 00:12:52.599 Message: 00:12:52.599 ================= 00:12:52.599 Content Skipped 00:12:52.599 ================= 00:12:52.599 00:12:52.599 apps: 00:12:52.599 dumpcap: explicitly disabled via build config 00:12:52.599 graph: explicitly disabled via build config 00:12:52.599 pdump: explicitly disabled via build config 00:12:52.599 proc-info: explicitly disabled via build config 00:12:52.599 test-acl: explicitly disabled via build config 00:12:52.599 test-bbdev: explicitly disabled via build config 00:12:52.599 test-cmdline: explicitly disabled via build config 00:12:52.599 test-compress-perf: explicitly disabled via build config 00:12:52.599 test-crypto-perf: explicitly disabled via build config 00:12:52.599 test-dma-perf: explicitly disabled via build config 00:12:52.599 test-eventdev: explicitly disabled via build config 00:12:52.599 test-fib: explicitly disabled via build config 00:12:52.599 test-flow-perf: explicitly disabled via build config 00:12:52.599 test-gpudev: explicitly disabled via build config 00:12:52.599 test-mldev: explicitly disabled via build config 00:12:52.599 test-pipeline: explicitly disabled via build config 00:12:52.599 test-pmd: explicitly disabled via build config 00:12:52.599 test-regex: explicitly disabled via build config 00:12:52.599 test-sad: explicitly disabled via build config 00:12:52.599 test-security-perf: explicitly disabled via build config 00:12:52.599 00:12:52.599 libs: 00:12:52.599 argparse: 
explicitly disabled via build config 00:12:52.599 metrics: explicitly disabled via build config 00:12:52.599 acl: explicitly disabled via build config 00:12:52.599 bbdev: explicitly disabled via build config 00:12:52.599 bitratestats: explicitly disabled via build config 00:12:52.599 bpf: explicitly disabled via build config 00:12:52.599 cfgfile: explicitly disabled via build config 00:12:52.599 distributor: explicitly disabled via build config 00:12:52.599 efd: explicitly disabled via build config 00:12:52.599 eventdev: explicitly disabled via build config 00:12:52.599 dispatcher: explicitly disabled via build config 00:12:52.599 gpudev: explicitly disabled via build config 00:12:52.599 gro: explicitly disabled via build config 00:12:52.599 gso: explicitly disabled via build config 00:12:52.599 ip_frag: explicitly disabled via build config 00:12:52.599 jobstats: explicitly disabled via build config 00:12:52.599 latencystats: explicitly disabled via build config 00:12:52.599 lpm: explicitly disabled via build config 00:12:52.599 member: explicitly disabled via build config 00:12:52.599 pcapng: explicitly disabled via build config 00:12:52.599 rawdev: explicitly disabled via build config 00:12:52.599 regexdev: explicitly disabled via build config 00:12:52.599 mldev: explicitly disabled via build config 00:12:52.599 rib: explicitly disabled via build config 00:12:52.599 sched: explicitly disabled via build config 00:12:52.599 stack: explicitly disabled via build config 00:12:52.599 ipsec: explicitly disabled via build config 00:12:52.599 pdcp: explicitly disabled via build config 00:12:52.599 fib: explicitly disabled via build config 00:12:52.599 port: explicitly disabled via build config 00:12:52.599 pdump: explicitly disabled via build config 00:12:52.599 table: explicitly disabled via build config 00:12:52.599 pipeline: explicitly disabled via build config 00:12:52.599 graph: explicitly disabled via build config 00:12:52.599 node: explicitly disabled via build config 00:12:52.599 00:12:52.599 drivers: 00:12:52.599 common/cpt: not in enabled drivers build config 00:12:52.599 common/dpaax: not in enabled drivers build config 00:12:52.599 common/iavf: not in enabled drivers build config 00:12:52.599 common/idpf: not in enabled drivers build config 00:12:52.599 common/ionic: not in enabled drivers build config 00:12:52.599 common/mvep: not in enabled drivers build config 00:12:52.599 common/octeontx: not in enabled drivers build config 00:12:52.599 bus/auxiliary: not in enabled drivers build config 00:12:52.599 bus/cdx: not in enabled drivers build config 00:12:52.599 bus/dpaa: not in enabled drivers build config 00:12:52.599 bus/fslmc: not in enabled drivers build config 00:12:52.599 bus/ifpga: not in enabled drivers build config 00:12:52.599 bus/platform: not in enabled drivers build config 00:12:52.599 bus/uacce: not in enabled drivers build config 00:12:52.599 bus/vmbus: not in enabled drivers build config 00:12:52.599 common/cnxk: not in enabled drivers build config 00:12:52.599 common/mlx5: not in enabled drivers build config 00:12:52.599 common/nfp: not in enabled drivers build config 00:12:52.599 common/nitrox: not in enabled drivers build config 00:12:52.599 common/qat: not in enabled drivers build config 00:12:52.599 common/sfc_efx: not in enabled drivers build config 00:12:52.599 mempool/bucket: not in enabled drivers build config 00:12:52.599 mempool/cnxk: not in enabled drivers build config 00:12:52.599 mempool/dpaa: not in enabled drivers build config 00:12:52.599 mempool/dpaa2: 
not in enabled drivers build config 00:12:52.599 mempool/octeontx: not in enabled drivers build config 00:12:52.599 mempool/stack: not in enabled drivers build config 00:12:52.599 dma/cnxk: not in enabled drivers build config 00:12:52.599 dma/dpaa: not in enabled drivers build config 00:12:52.599 dma/dpaa2: not in enabled drivers build config 00:12:52.599 dma/hisilicon: not in enabled drivers build config 00:12:52.599 dma/idxd: not in enabled drivers build config 00:12:52.599 dma/ioat: not in enabled drivers build config 00:12:52.599 dma/skeleton: not in enabled drivers build config 00:12:52.599 net/af_packet: not in enabled drivers build config 00:12:52.599 net/af_xdp: not in enabled drivers build config 00:12:52.599 net/ark: not in enabled drivers build config 00:12:52.600 net/atlantic: not in enabled drivers build config 00:12:52.600 net/avp: not in enabled drivers build config 00:12:52.600 net/axgbe: not in enabled drivers build config 00:12:52.600 net/bnx2x: not in enabled drivers build config 00:12:52.600 net/bnxt: not in enabled drivers build config 00:12:52.600 net/bonding: not in enabled drivers build config 00:12:52.600 net/cnxk: not in enabled drivers build config 00:12:52.600 net/cpfl: not in enabled drivers build config 00:12:52.600 net/cxgbe: not in enabled drivers build config 00:12:52.600 net/dpaa: not in enabled drivers build config 00:12:52.600 net/dpaa2: not in enabled drivers build config 00:12:52.600 net/e1000: not in enabled drivers build config 00:12:52.600 net/ena: not in enabled drivers build config 00:12:52.600 net/enetc: not in enabled drivers build config 00:12:52.600 net/enetfec: not in enabled drivers build config 00:12:52.600 net/enic: not in enabled drivers build config 00:12:52.600 net/failsafe: not in enabled drivers build config 00:12:52.600 net/fm10k: not in enabled drivers build config 00:12:52.600 net/gve: not in enabled drivers build config 00:12:52.600 net/hinic: not in enabled drivers build config 00:12:52.600 net/hns3: not in enabled drivers build config 00:12:52.600 net/i40e: not in enabled drivers build config 00:12:52.600 net/iavf: not in enabled drivers build config 00:12:52.600 net/ice: not in enabled drivers build config 00:12:52.600 net/idpf: not in enabled drivers build config 00:12:52.600 net/igc: not in enabled drivers build config 00:12:52.600 net/ionic: not in enabled drivers build config 00:12:52.600 net/ipn3ke: not in enabled drivers build config 00:12:52.600 net/ixgbe: not in enabled drivers build config 00:12:52.600 net/mana: not in enabled drivers build config 00:12:52.600 net/memif: not in enabled drivers build config 00:12:52.600 net/mlx4: not in enabled drivers build config 00:12:52.600 net/mlx5: not in enabled drivers build config 00:12:52.600 net/mvneta: not in enabled drivers build config 00:12:52.600 net/mvpp2: not in enabled drivers build config 00:12:52.600 net/netvsc: not in enabled drivers build config 00:12:52.600 net/nfb: not in enabled drivers build config 00:12:52.600 net/nfp: not in enabled drivers build config 00:12:52.600 net/ngbe: not in enabled drivers build config 00:12:52.600 net/null: not in enabled drivers build config 00:12:52.600 net/octeontx: not in enabled drivers build config 00:12:52.600 net/octeon_ep: not in enabled drivers build config 00:12:52.600 net/pcap: not in enabled drivers build config 00:12:52.600 net/pfe: not in enabled drivers build config 00:12:52.600 net/qede: not in enabled drivers build config 00:12:52.600 net/ring: not in enabled drivers build config 00:12:52.600 net/sfc: not in 
enabled drivers build config 00:12:52.600 net/softnic: not in enabled drivers build config 00:12:52.600 net/tap: not in enabled drivers build config 00:12:52.600 net/thunderx: not in enabled drivers build config 00:12:52.600 net/txgbe: not in enabled drivers build config 00:12:52.600 net/vdev_netvsc: not in enabled drivers build config 00:12:52.600 net/vhost: not in enabled drivers build config 00:12:52.600 net/virtio: not in enabled drivers build config 00:12:52.600 net/vmxnet3: not in enabled drivers build config 00:12:52.600 raw/*: missing internal dependency, "rawdev" 00:12:52.600 crypto/armv8: not in enabled drivers build config 00:12:52.600 crypto/bcmfs: not in enabled drivers build config 00:12:52.600 crypto/caam_jr: not in enabled drivers build config 00:12:52.600 crypto/ccp: not in enabled drivers build config 00:12:52.600 crypto/cnxk: not in enabled drivers build config 00:12:52.600 crypto/dpaa_sec: not in enabled drivers build config 00:12:52.600 crypto/dpaa2_sec: not in enabled drivers build config 00:12:52.600 crypto/ipsec_mb: not in enabled drivers build config 00:12:52.600 crypto/mlx5: not in enabled drivers build config 00:12:52.600 crypto/mvsam: not in enabled drivers build config 00:12:52.600 crypto/nitrox: not in enabled drivers build config 00:12:52.600 crypto/null: not in enabled drivers build config 00:12:52.600 crypto/octeontx: not in enabled drivers build config 00:12:52.600 crypto/openssl: not in enabled drivers build config 00:12:52.600 crypto/scheduler: not in enabled drivers build config 00:12:52.600 crypto/uadk: not in enabled drivers build config 00:12:52.600 crypto/virtio: not in enabled drivers build config 00:12:52.600 compress/isal: not in enabled drivers build config 00:12:52.600 compress/mlx5: not in enabled drivers build config 00:12:52.600 compress/nitrox: not in enabled drivers build config 00:12:52.600 compress/octeontx: not in enabled drivers build config 00:12:52.600 compress/zlib: not in enabled drivers build config 00:12:52.600 regex/*: missing internal dependency, "regexdev" 00:12:52.600 ml/*: missing internal dependency, "mldev" 00:12:52.600 vdpa/ifc: not in enabled drivers build config 00:12:52.600 vdpa/mlx5: not in enabled drivers build config 00:12:52.600 vdpa/nfp: not in enabled drivers build config 00:12:52.600 vdpa/sfc: not in enabled drivers build config 00:12:52.600 event/*: missing internal dependency, "eventdev" 00:12:52.600 baseband/*: missing internal dependency, "bbdev" 00:12:52.600 gpu/*: missing internal dependency, "gpudev" 00:12:52.600 00:12:52.600 00:12:52.600 Build targets in project: 85 00:12:52.600 00:12:52.600 DPDK 24.03.0 00:12:52.600 00:12:52.600 User defined options 00:12:52.600 buildtype : debug 00:12:52.600 default_library : shared 00:12:52.600 libdir : lib 00:12:52.600 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:12:52.600 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:12:52.600 c_link_args : 00:12:52.600 cpu_instruction_set: native 00:12:52.600 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:12:52.600 disable_libs : 
acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:12:52.600 enable_docs : false 00:12:52.600 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:12:52.600 enable_kmods : false 00:12:52.600 max_lcores : 128 00:12:52.600 tests : false 00:12:52.600 00:12:52.600 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:12:52.600 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:12:52.600 [1/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:12:52.600 [2/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:12:52.600 [3/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:12:52.600 [4/268] Linking static target lib/librte_kvargs.a 00:12:52.600 [5/268] Linking static target lib/librte_log.a 00:12:52.600 [6/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:12:52.600 [7/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:12:52.600 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:12:52.600 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:12:52.600 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:12:52.600 [11/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:12:52.600 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:12:52.600 [13/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:12:52.861 [14/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:12:52.861 [15/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:12:52.861 [16/268] Linking static target lib/librte_telemetry.a 00:12:52.861 [17/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:12:52.861 [18/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:12:53.124 [19/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:12:53.390 [20/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:12:53.390 [21/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:12:53.390 [22/268] Linking target lib/librte_log.so.24.1 00:12:53.390 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:12:53.390 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:12:53.390 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:12:53.390 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:12:53.390 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:12:53.658 [28/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:12:53.658 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:12:53.658 [30/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:12:53.658 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:12:53.658 [32/268] Linking target lib/librte_kvargs.so.24.1 00:12:53.658 [33/268] Linking target lib/librte_telemetry.so.24.1 00:12:53.659 [34/268] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:12:53.920 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:12:53.920 [36/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:12:53.920 [37/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:12:53.920 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:12:53.920 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:12:53.921 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:12:53.921 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:12:54.180 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:12:54.180 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:12:54.180 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:12:54.180 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:12:54.180 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:12:54.439 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:12:54.439 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:12:54.439 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:12:54.439 [50/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:12:54.439 [51/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:12:54.849 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:12:54.849 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:12:54.849 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:12:54.849 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:12:54.849 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:12:54.849 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:12:55.107 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:12:55.107 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:12:55.107 [60/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:12:55.107 [61/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:12:55.107 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:12:55.107 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:12:55.107 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:12:55.107 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:12:55.364 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:12:55.622 [67/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:12:55.622 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:12:55.622 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:12:55.622 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:12:55.622 [71/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:12:55.622 [72/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:12:55.881 [73/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 
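The "User defined options" summary above records how the bundled DPDK 24.03.0 was configured for this run: a debug build of shared libraries with a pruned set of apps and libs, and only the bus/pci, bus/vdev and mempool/ring drivers enabled. As a rough sketch only (the exact command issued by SPDK's dpdkbuild wrapper is not shown in this log, and the disable_apps/disable_libs values stand for the full lists printed above), an equivalent manual configure-and-build step would look approximately like:

  # illustrative reconstruction; the real invocation comes from SPDK's build scripts
  cd /home/vagrant/spdk_repo/spdk/dpdk
  meson setup build-tmp \
    --prefix=$PWD/build --libdir=lib \
    -Dbuildtype=debug -Ddefault_library=shared \
    -Dc_args='-Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror' \
    -Dcpu_instruction_set=native -Dmax_lcores=128 \
    -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring \
    -Ddisable_apps=dumpcap,graph,... -Ddisable_libs=acl,argparse,... \
    -Denable_docs=false -Denable_kmods=false -Dtests=false
  ninja -C build-tmp -j 10   # matches the backend command reported further down in the log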
00:12:55.881 [74/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:12:55.881 [75/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:12:55.881 [76/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:12:55.881 [77/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:12:55.881 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:12:55.881 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:12:56.139 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:12:56.139 [81/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:12:56.397 [82/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:12:56.397 [83/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:12:56.397 [84/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:12:56.397 [85/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:12:56.397 [86/268] Linking static target lib/librte_ring.a 00:12:56.397 [87/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:12:56.656 [88/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:12:56.656 [89/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:12:56.656 [90/268] Linking static target lib/librte_eal.a 00:12:56.656 [91/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:12:56.656 [92/268] Linking static target lib/librte_rcu.a 00:12:56.656 [93/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:12:56.916 [94/268] Linking static target lib/librte_mempool.a 00:12:56.916 [95/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:12:56.916 [96/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:12:56.916 [97/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:12:57.175 [98/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:12:57.175 [99/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:12:57.175 [100/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:12:57.175 [101/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:12:57.435 [102/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:12:57.435 [103/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:12:57.435 [104/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:12:57.435 [105/268] Linking static target lib/librte_mbuf.a 00:12:57.435 [106/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:12:57.694 [107/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:12:57.694 [108/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:12:57.694 [109/268] Linking static target lib/librte_meter.a 00:12:57.694 [110/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:12:57.694 [111/268] Linking static target lib/librte_net.a 00:12:57.952 [112/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:12:58.210 [113/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:12:58.210 [114/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:12:58.210 [115/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:12:58.210 [116/268] 
Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:12:58.210 [117/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:12:58.469 [118/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:12:58.469 [119/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:12:58.728 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:12:58.728 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:12:58.728 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:12:58.987 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:12:59.246 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:12:59.247 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:12:59.247 [126/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:12:59.247 [127/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:12:59.247 [128/268] Linking static target lib/librte_pci.a 00:12:59.247 [129/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:12:59.247 [130/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:12:59.247 [131/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:12:59.505 [132/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:12:59.506 [133/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:12:59.506 [134/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:12:59.506 [135/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:12:59.506 [136/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:12:59.506 [137/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:12:59.506 [138/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:12:59.506 [139/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:12:59.506 [140/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:12:59.506 [141/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:12:59.506 [142/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:12:59.506 [143/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:12:59.506 [144/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:12:59.765 [145/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:12:59.765 [146/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:12:59.765 [147/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:12:59.765 [148/268] Linking static target lib/librte_cmdline.a 00:13:00.027 [149/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:13:00.027 [150/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:13:00.027 [151/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:13:00.027 [152/268] Linking static target lib/librte_ethdev.a 00:13:00.285 [153/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:13:00.285 [154/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:13:00.285 [155/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 
00:13:00.285 [156/268] Linking static target lib/librte_timer.a 00:13:00.285 [157/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:13:00.285 [158/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:13:00.285 [159/268] Linking static target lib/librte_hash.a 00:13:00.285 [160/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:13:00.285 [161/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:13:00.285 [162/268] Linking static target lib/librte_compressdev.a 00:13:00.851 [163/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:13:00.851 [164/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:13:00.851 [165/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:13:00.851 [166/268] Linking static target lib/librte_dmadev.a 00:13:00.851 [167/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:13:00.851 [168/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:13:00.851 [169/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:13:01.110 [170/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:13:01.110 [171/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:13:01.110 [172/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:13:01.110 [173/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:13:01.110 [174/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:13:01.110 [175/268] Linking static target lib/librte_cryptodev.a 00:13:01.368 [176/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:13:01.368 [177/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:13:01.368 [178/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:13:01.627 [179/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:13:01.627 [180/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:13:01.627 [181/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:13:01.627 [182/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:13:01.627 [183/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:13:01.627 [184/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:13:01.885 [185/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:13:01.885 [186/268] Linking static target lib/librte_power.a 00:13:01.885 [187/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:13:01.885 [188/268] Linking static target lib/librte_reorder.a 00:13:02.143 [189/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:13:02.143 [190/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:13:02.401 [191/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:13:02.401 [192/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:13:02.401 [193/268] Linking static target lib/librte_security.a 00:13:02.401 [194/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:13:02.401 [195/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:13:02.969 
[196/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:13:02.969 [197/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:13:03.227 [198/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:13:03.228 [199/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:13:03.228 [200/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:13:03.228 [201/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:13:03.485 [202/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:13:03.485 [203/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:13:03.485 [204/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:13:03.485 [205/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:13:03.743 [206/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:13:03.743 [207/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:13:03.743 [208/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:13:03.743 [209/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:13:04.000 [210/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:13:04.000 [211/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:13:04.000 [212/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:13:04.000 [213/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:13:04.000 [214/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:13:04.000 [215/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:13:04.000 [216/268] Linking static target drivers/librte_bus_pci.a 00:13:04.000 [217/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:13:04.001 [218/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:13:04.001 [219/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:13:04.001 [220/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:13:04.001 [221/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:13:04.259 [222/268] Linking static target drivers/librte_bus_vdev.a 00:13:04.259 [223/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:13:04.259 [224/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:13:04.259 [225/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:13:04.259 [226/268] Linking static target drivers/librte_mempool_ring.a 00:13:04.259 [227/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:13:04.518 [228/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:13:05.455 [229/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:13:05.455 [230/268] Linking static target lib/librte_vhost.a 00:13:07.367 [231/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:13:07.626 [232/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 
00:13:07.626 [233/268] Linking target lib/librte_eal.so.24.1 00:13:07.886 [234/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:13:07.886 [235/268] Linking target lib/librte_pci.so.24.1 00:13:07.886 [236/268] Linking target drivers/librte_bus_vdev.so.24.1 00:13:07.886 [237/268] Linking target lib/librte_meter.so.24.1 00:13:07.886 [238/268] Linking target lib/librte_ring.so.24.1 00:13:07.886 [239/268] Linking target lib/librte_timer.so.24.1 00:13:07.886 [240/268] Linking target lib/librte_dmadev.so.24.1 00:13:07.886 [241/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:13:07.886 [242/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:13:07.886 [243/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:13:07.886 [244/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:13:07.886 [245/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:13:08.144 [246/268] Linking target drivers/librte_bus_pci.so.24.1 00:13:08.144 [247/268] Linking target lib/librte_mempool.so.24.1 00:13:08.144 [248/268] Linking target lib/librte_rcu.so.24.1 00:13:08.144 [249/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:13:08.144 [250/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:13:08.144 [251/268] Linking target lib/librte_mbuf.so.24.1 00:13:08.144 [252/268] Linking target drivers/librte_mempool_ring.so.24.1 00:13:08.404 [253/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:13:08.404 [254/268] Linking target lib/librte_reorder.so.24.1 00:13:08.404 [255/268] Linking target lib/librte_cryptodev.so.24.1 00:13:08.404 [256/268] Linking target lib/librte_net.so.24.1 00:13:08.404 [257/268] Linking target lib/librte_compressdev.so.24.1 00:13:08.662 [258/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:13:08.662 [259/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:13:08.663 [260/268] Linking target lib/librte_security.so.24.1 00:13:08.663 [261/268] Linking target lib/librte_hash.so.24.1 00:13:08.663 [262/268] Linking target lib/librte_cmdline.so.24.1 00:13:08.922 [263/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:13:09.180 [264/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:13:09.180 [265/268] Linking target lib/librte_ethdev.so.24.1 00:13:09.439 [266/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:13:09.439 [267/268] Linking target lib/librte_power.so.24.1 00:13:09.439 [268/268] Linking target lib/librte_vhost.so.24.1 00:13:09.439 INFO: autodetecting backend as ninja 00:13:09.439 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:13:10.821 CC lib/ut/ut.o 00:13:10.821 CC lib/ut_mock/mock.o 00:13:10.821 CC lib/log/log.o 00:13:10.821 CC lib/log/log_flags.o 00:13:10.821 CC lib/log/log_deprecated.o 00:13:10.821 LIB libspdk_ut.a 00:13:10.821 LIB libspdk_ut_mock.a 00:13:10.821 SO libspdk_ut.so.2.0 00:13:10.821 SO libspdk_ut_mock.so.6.0 00:13:10.821 LIB libspdk_log.a 00:13:11.079 SYMLINK libspdk_ut.so 00:13:11.079 SYMLINK libspdk_ut_mock.so 00:13:11.079 SO libspdk_log.so.7.0 00:13:11.079 SYMLINK libspdk_log.so 00:13:11.337 CXX 
lib/trace_parser/trace.o 00:13:11.337 CC lib/util/base64.o 00:13:11.337 CC lib/util/bit_array.o 00:13:11.337 CC lib/util/crc16.o 00:13:11.337 CC lib/util/cpuset.o 00:13:11.337 CC lib/util/crc32.o 00:13:11.337 CC lib/dma/dma.o 00:13:11.337 CC lib/util/crc32c.o 00:13:11.337 CC lib/ioat/ioat.o 00:13:11.337 CC lib/vfio_user/host/vfio_user_pci.o 00:13:11.337 CC lib/util/crc32_ieee.o 00:13:11.594 CC lib/util/crc64.o 00:13:11.594 CC lib/util/dif.o 00:13:11.594 CC lib/vfio_user/host/vfio_user.o 00:13:11.594 CC lib/util/fd.o 00:13:11.594 LIB libspdk_dma.a 00:13:11.594 CC lib/util/file.o 00:13:11.594 CC lib/util/hexlify.o 00:13:11.594 CC lib/util/iov.o 00:13:11.594 SO libspdk_dma.so.4.0 00:13:11.594 LIB libspdk_ioat.a 00:13:11.594 CC lib/util/math.o 00:13:11.594 SYMLINK libspdk_dma.so 00:13:11.594 CC lib/util/pipe.o 00:13:11.594 SO libspdk_ioat.so.7.0 00:13:11.594 LIB libspdk_vfio_user.a 00:13:11.594 CC lib/util/strerror_tls.o 00:13:11.851 SO libspdk_vfio_user.so.5.0 00:13:11.851 SYMLINK libspdk_ioat.so 00:13:11.851 CC lib/util/string.o 00:13:11.851 CC lib/util/uuid.o 00:13:11.851 CC lib/util/fd_group.o 00:13:11.851 CC lib/util/xor.o 00:13:11.851 SYMLINK libspdk_vfio_user.so 00:13:11.851 CC lib/util/zipf.o 00:13:12.109 LIB libspdk_util.a 00:13:12.109 SO libspdk_util.so.9.1 00:13:12.109 LIB libspdk_trace_parser.a 00:13:12.109 SYMLINK libspdk_util.so 00:13:12.367 SO libspdk_trace_parser.so.5.0 00:13:12.367 SYMLINK libspdk_trace_parser.so 00:13:12.367 CC lib/rdma_utils/rdma_utils.o 00:13:12.367 CC lib/vmd/vmd.o 00:13:12.367 CC lib/vmd/led.o 00:13:12.367 CC lib/json/json_util.o 00:13:12.367 CC lib/json/json_parse.o 00:13:12.367 CC lib/json/json_write.o 00:13:12.367 CC lib/env_dpdk/env.o 00:13:12.367 CC lib/idxd/idxd.o 00:13:12.367 CC lib/rdma_provider/common.o 00:13:12.367 CC lib/conf/conf.o 00:13:12.626 CC lib/env_dpdk/memory.o 00:13:12.626 CC lib/rdma_provider/rdma_provider_verbs.o 00:13:12.626 CC lib/env_dpdk/pci.o 00:13:12.626 CC lib/idxd/idxd_user.o 00:13:12.626 LIB libspdk_conf.a 00:13:12.626 LIB libspdk_rdma_utils.a 00:13:12.626 SO libspdk_conf.so.6.0 00:13:12.626 LIB libspdk_json.a 00:13:12.626 SO libspdk_rdma_utils.so.1.0 00:13:12.626 SO libspdk_json.so.6.0 00:13:12.626 SYMLINK libspdk_conf.so 00:13:12.626 CC lib/env_dpdk/init.o 00:13:12.626 SYMLINK libspdk_rdma_utils.so 00:13:12.626 CC lib/env_dpdk/threads.o 00:13:12.884 SYMLINK libspdk_json.so 00:13:12.884 CC lib/env_dpdk/pci_ioat.o 00:13:12.884 LIB libspdk_rdma_provider.a 00:13:12.884 SO libspdk_rdma_provider.so.6.0 00:13:12.884 CC lib/env_dpdk/pci_virtio.o 00:13:12.884 SYMLINK libspdk_rdma_provider.so 00:13:12.884 CC lib/env_dpdk/pci_vmd.o 00:13:12.884 CC lib/env_dpdk/pci_idxd.o 00:13:12.884 CC lib/env_dpdk/pci_event.o 00:13:12.884 CC lib/idxd/idxd_kernel.o 00:13:12.884 CC lib/env_dpdk/sigbus_handler.o 00:13:12.884 CC lib/env_dpdk/pci_dpdk.o 00:13:12.884 CC lib/env_dpdk/pci_dpdk_2207.o 00:13:12.884 LIB libspdk_vmd.a 00:13:12.884 CC lib/env_dpdk/pci_dpdk_2211.o 00:13:13.141 SO libspdk_vmd.so.6.0 00:13:13.141 LIB libspdk_idxd.a 00:13:13.141 SYMLINK libspdk_vmd.so 00:13:13.141 SO libspdk_idxd.so.12.0 00:13:13.141 SYMLINK libspdk_idxd.so 00:13:13.400 CC lib/jsonrpc/jsonrpc_client.o 00:13:13.400 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:13:13.400 CC lib/jsonrpc/jsonrpc_server.o 00:13:13.400 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:13:13.400 LIB libspdk_jsonrpc.a 00:13:13.680 SO libspdk_jsonrpc.so.6.0 00:13:13.680 SYMLINK libspdk_jsonrpc.so 00:13:13.680 LIB libspdk_env_dpdk.a 00:13:13.938 SO libspdk_env_dpdk.so.14.1 00:13:13.938 SYMLINK 
libspdk_env_dpdk.so 00:13:13.938 CC lib/rpc/rpc.o 00:13:14.196 LIB libspdk_rpc.a 00:13:14.196 SO libspdk_rpc.so.6.0 00:13:14.196 SYMLINK libspdk_rpc.so 00:13:14.763 CC lib/notify/notify_rpc.o 00:13:14.763 CC lib/notify/notify.o 00:13:14.763 CC lib/trace/trace.o 00:13:14.763 CC lib/trace/trace_flags.o 00:13:14.763 CC lib/keyring/keyring.o 00:13:14.763 CC lib/trace/trace_rpc.o 00:13:14.763 CC lib/keyring/keyring_rpc.o 00:13:14.763 LIB libspdk_notify.a 00:13:14.763 SO libspdk_notify.so.6.0 00:13:14.764 LIB libspdk_keyring.a 00:13:15.023 LIB libspdk_trace.a 00:13:15.023 SYMLINK libspdk_notify.so 00:13:15.023 SO libspdk_keyring.so.1.0 00:13:15.023 SO libspdk_trace.so.10.0 00:13:15.023 SYMLINK libspdk_keyring.so 00:13:15.023 SYMLINK libspdk_trace.so 00:13:15.282 CC lib/thread/thread.o 00:13:15.282 CC lib/thread/iobuf.o 00:13:15.282 CC lib/sock/sock.o 00:13:15.282 CC lib/sock/sock_rpc.o 00:13:15.850 LIB libspdk_sock.a 00:13:15.850 SO libspdk_sock.so.10.0 00:13:15.850 SYMLINK libspdk_sock.so 00:13:16.440 CC lib/nvme/nvme_ctrlr_cmd.o 00:13:16.440 CC lib/nvme/nvme_ctrlr.o 00:13:16.440 CC lib/nvme/nvme_fabric.o 00:13:16.440 CC lib/nvme/nvme_ns_cmd.o 00:13:16.440 CC lib/nvme/nvme_ns.o 00:13:16.440 CC lib/nvme/nvme_pcie_common.o 00:13:16.440 CC lib/nvme/nvme_pcie.o 00:13:16.440 CC lib/nvme/nvme.o 00:13:16.440 CC lib/nvme/nvme_qpair.o 00:13:16.706 LIB libspdk_thread.a 00:13:16.706 SO libspdk_thread.so.10.1 00:13:16.706 SYMLINK libspdk_thread.so 00:13:16.706 CC lib/nvme/nvme_quirks.o 00:13:16.965 CC lib/nvme/nvme_transport.o 00:13:16.965 CC lib/nvme/nvme_discovery.o 00:13:16.965 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:13:16.965 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:13:17.224 CC lib/nvme/nvme_tcp.o 00:13:17.224 CC lib/nvme/nvme_opal.o 00:13:17.224 CC lib/nvme/nvme_io_msg.o 00:13:17.482 CC lib/nvme/nvme_poll_group.o 00:13:17.482 CC lib/accel/accel.o 00:13:17.740 CC lib/nvme/nvme_zns.o 00:13:17.740 CC lib/nvme/nvme_stubs.o 00:13:17.740 CC lib/blob/blobstore.o 00:13:17.740 CC lib/init/json_config.o 00:13:17.740 CC lib/virtio/virtio.o 00:13:17.740 CC lib/virtio/virtio_vhost_user.o 00:13:17.998 CC lib/virtio/virtio_vfio_user.o 00:13:17.998 CC lib/init/subsystem.o 00:13:17.998 CC lib/init/subsystem_rpc.o 00:13:17.998 CC lib/nvme/nvme_auth.o 00:13:17.998 CC lib/nvme/nvme_cuse.o 00:13:17.998 CC lib/virtio/virtio_pci.o 00:13:18.257 CC lib/init/rpc.o 00:13:18.257 CC lib/accel/accel_rpc.o 00:13:18.257 CC lib/accel/accel_sw.o 00:13:18.257 LIB libspdk_init.a 00:13:18.257 CC lib/blob/request.o 00:13:18.257 SO libspdk_init.so.5.0 00:13:18.257 CC lib/blob/zeroes.o 00:13:18.517 CC lib/nvme/nvme_rdma.o 00:13:18.517 LIB libspdk_virtio.a 00:13:18.517 SO libspdk_virtio.so.7.0 00:13:18.517 SYMLINK libspdk_init.so 00:13:18.517 CC lib/blob/blob_bs_dev.o 00:13:18.517 LIB libspdk_accel.a 00:13:18.517 SYMLINK libspdk_virtio.so 00:13:18.517 SO libspdk_accel.so.15.1 00:13:18.517 SYMLINK libspdk_accel.so 00:13:18.776 CC lib/event/reactor.o 00:13:18.776 CC lib/event/app.o 00:13:18.776 CC lib/event/app_rpc.o 00:13:18.776 CC lib/event/log_rpc.o 00:13:18.776 CC lib/event/scheduler_static.o 00:13:18.776 CC lib/bdev/bdev.o 00:13:18.776 CC lib/bdev/bdev_rpc.o 00:13:18.776 CC lib/bdev/bdev_zone.o 00:13:19.035 CC lib/bdev/part.o 00:13:19.035 CC lib/bdev/scsi_nvme.o 00:13:19.035 LIB libspdk_event.a 00:13:19.294 SO libspdk_event.so.14.0 00:13:19.294 SYMLINK libspdk_event.so 00:13:19.554 LIB libspdk_nvme.a 00:13:19.814 SO libspdk_nvme.so.13.1 00:13:20.072 SYMLINK libspdk_nvme.so 00:13:20.331 LIB libspdk_blob.a 00:13:20.590 SO libspdk_blob.so.11.0 
00:13:20.590 SYMLINK libspdk_blob.so 00:13:21.158 CC lib/blobfs/blobfs.o 00:13:21.158 CC lib/blobfs/tree.o 00:13:21.158 CC lib/lvol/lvol.o 00:13:21.158 LIB libspdk_bdev.a 00:13:21.158 SO libspdk_bdev.so.15.1 00:13:21.158 SYMLINK libspdk_bdev.so 00:13:21.418 CC lib/ftl/ftl_core.o 00:13:21.418 CC lib/ftl/ftl_init.o 00:13:21.418 CC lib/ftl/ftl_layout.o 00:13:21.418 CC lib/ftl/ftl_debug.o 00:13:21.418 CC lib/nbd/nbd.o 00:13:21.418 CC lib/scsi/dev.o 00:13:21.418 CC lib/nvmf/ctrlr.o 00:13:21.418 CC lib/ublk/ublk.o 00:13:21.676 CC lib/ublk/ublk_rpc.o 00:13:21.676 LIB libspdk_blobfs.a 00:13:21.676 SO libspdk_blobfs.so.10.0 00:13:21.676 CC lib/scsi/lun.o 00:13:21.676 CC lib/scsi/port.o 00:13:21.676 CC lib/ftl/ftl_io.o 00:13:21.933 SYMLINK libspdk_blobfs.so 00:13:21.933 CC lib/ftl/ftl_sb.o 00:13:21.933 LIB libspdk_lvol.a 00:13:21.933 SO libspdk_lvol.so.10.0 00:13:21.933 CC lib/nbd/nbd_rpc.o 00:13:21.933 CC lib/ftl/ftl_l2p.o 00:13:21.933 CC lib/scsi/scsi.o 00:13:21.933 CC lib/scsi/scsi_bdev.o 00:13:21.933 SYMLINK libspdk_lvol.so 00:13:21.933 CC lib/nvmf/ctrlr_discovery.o 00:13:21.933 CC lib/ftl/ftl_l2p_flat.o 00:13:21.933 CC lib/scsi/scsi_pr.o 00:13:21.933 LIB libspdk_nbd.a 00:13:21.933 CC lib/scsi/scsi_rpc.o 00:13:21.933 CC lib/ftl/ftl_nv_cache.o 00:13:22.191 SO libspdk_nbd.so.7.0 00:13:22.191 CC lib/nvmf/ctrlr_bdev.o 00:13:22.191 SYMLINK libspdk_nbd.so 00:13:22.191 LIB libspdk_ublk.a 00:13:22.191 CC lib/nvmf/subsystem.o 00:13:22.191 SO libspdk_ublk.so.3.0 00:13:22.191 CC lib/nvmf/nvmf.o 00:13:22.191 CC lib/nvmf/nvmf_rpc.o 00:13:22.191 SYMLINK libspdk_ublk.so 00:13:22.191 CC lib/nvmf/transport.o 00:13:22.451 CC lib/scsi/task.o 00:13:22.451 CC lib/nvmf/tcp.o 00:13:22.451 CC lib/nvmf/stubs.o 00:13:22.451 LIB libspdk_scsi.a 00:13:22.713 SO libspdk_scsi.so.9.0 00:13:22.713 SYMLINK libspdk_scsi.so 00:13:22.713 CC lib/nvmf/mdns_server.o 00:13:22.713 CC lib/nvmf/rdma.o 00:13:22.972 CC lib/nvmf/auth.o 00:13:22.972 CC lib/iscsi/conn.o 00:13:22.972 CC lib/ftl/ftl_band.o 00:13:22.972 CC lib/ftl/ftl_band_ops.o 00:13:22.972 CC lib/iscsi/init_grp.o 00:13:23.230 CC lib/iscsi/iscsi.o 00:13:23.230 CC lib/vhost/vhost.o 00:13:23.230 CC lib/vhost/vhost_rpc.o 00:13:23.230 CC lib/ftl/ftl_writer.o 00:13:23.230 CC lib/ftl/ftl_rq.o 00:13:23.489 CC lib/ftl/ftl_reloc.o 00:13:23.489 CC lib/iscsi/md5.o 00:13:23.489 CC lib/ftl/ftl_l2p_cache.o 00:13:23.489 CC lib/vhost/vhost_scsi.o 00:13:23.748 CC lib/iscsi/param.o 00:13:23.748 CC lib/ftl/ftl_p2l.o 00:13:23.748 CC lib/iscsi/portal_grp.o 00:13:23.748 CC lib/vhost/vhost_blk.o 00:13:23.748 CC lib/vhost/rte_vhost_user.o 00:13:24.006 CC lib/ftl/mngt/ftl_mngt.o 00:13:24.006 CC lib/iscsi/tgt_node.o 00:13:24.006 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:13:24.006 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:13:24.006 CC lib/ftl/mngt/ftl_mngt_startup.o 00:13:24.265 CC lib/ftl/mngt/ftl_mngt_md.o 00:13:24.265 CC lib/ftl/mngt/ftl_mngt_misc.o 00:13:24.265 CC lib/iscsi/iscsi_subsystem.o 00:13:24.265 CC lib/iscsi/iscsi_rpc.o 00:13:24.524 CC lib/iscsi/task.o 00:13:24.524 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:13:24.524 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:13:24.524 CC lib/ftl/mngt/ftl_mngt_band.o 00:13:24.524 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:13:24.524 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:13:24.524 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:13:24.524 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:13:24.524 CC lib/ftl/utils/ftl_conf.o 00:13:24.524 LIB libspdk_iscsi.a 00:13:24.783 CC lib/ftl/utils/ftl_md.o 00:13:24.783 SO libspdk_iscsi.so.8.0 00:13:24.783 CC lib/ftl/utils/ftl_mempool.o 00:13:24.783 LIB libspdk_nvmf.a 
00:13:24.783 CC lib/ftl/utils/ftl_bitmap.o 00:13:24.783 CC lib/ftl/utils/ftl_property.o 00:13:24.783 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:13:24.783 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:13:24.783 LIB libspdk_vhost.a 00:13:25.042 SYMLINK libspdk_iscsi.so 00:13:25.042 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:13:25.042 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:13:25.042 SO libspdk_nvmf.so.18.1 00:13:25.042 SO libspdk_vhost.so.8.0 00:13:25.042 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:13:25.042 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:13:25.042 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:13:25.042 SYMLINK libspdk_vhost.so 00:13:25.042 CC lib/ftl/upgrade/ftl_sb_v3.o 00:13:25.042 CC lib/ftl/upgrade/ftl_sb_v5.o 00:13:25.042 CC lib/ftl/nvc/ftl_nvc_dev.o 00:13:25.042 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:13:25.304 SYMLINK libspdk_nvmf.so 00:13:25.304 CC lib/ftl/base/ftl_base_dev.o 00:13:25.304 CC lib/ftl/base/ftl_base_bdev.o 00:13:25.304 CC lib/ftl/ftl_trace.o 00:13:25.562 LIB libspdk_ftl.a 00:13:25.562 SO libspdk_ftl.so.9.0 00:13:26.131 SYMLINK libspdk_ftl.so 00:13:26.388 CC module/env_dpdk/env_dpdk_rpc.o 00:13:26.388 CC module/keyring/file/keyring.o 00:13:26.388 CC module/accel/ioat/accel_ioat.o 00:13:26.388 CC module/accel/error/accel_error.o 00:13:26.388 CC module/keyring/linux/keyring.o 00:13:26.388 CC module/accel/iaa/accel_iaa.o 00:13:26.388 CC module/sock/posix/posix.o 00:13:26.388 CC module/accel/dsa/accel_dsa.o 00:13:26.388 CC module/scheduler/dynamic/scheduler_dynamic.o 00:13:26.388 CC module/blob/bdev/blob_bdev.o 00:13:26.388 LIB libspdk_env_dpdk_rpc.a 00:13:26.646 SO libspdk_env_dpdk_rpc.so.6.0 00:13:26.646 CC module/keyring/file/keyring_rpc.o 00:13:26.646 SYMLINK libspdk_env_dpdk_rpc.so 00:13:26.646 CC module/keyring/linux/keyring_rpc.o 00:13:26.646 CC module/accel/error/accel_error_rpc.o 00:13:26.646 CC module/accel/ioat/accel_ioat_rpc.o 00:13:26.646 CC module/accel/iaa/accel_iaa_rpc.o 00:13:26.646 LIB libspdk_scheduler_dynamic.a 00:13:26.646 SO libspdk_scheduler_dynamic.so.4.0 00:13:26.646 CC module/accel/dsa/accel_dsa_rpc.o 00:13:26.646 LIB libspdk_keyring_file.a 00:13:26.646 LIB libspdk_blob_bdev.a 00:13:26.646 LIB libspdk_keyring_linux.a 00:13:26.646 SO libspdk_keyring_file.so.1.0 00:13:26.647 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:13:26.647 SO libspdk_blob_bdev.so.11.0 00:13:26.647 SYMLINK libspdk_scheduler_dynamic.so 00:13:26.647 LIB libspdk_accel_error.a 00:13:26.647 SO libspdk_keyring_linux.so.1.0 00:13:26.647 LIB libspdk_accel_ioat.a 00:13:26.647 LIB libspdk_accel_iaa.a 00:13:26.905 SO libspdk_accel_error.so.2.0 00:13:26.905 SO libspdk_accel_ioat.so.6.0 00:13:26.905 SYMLINK libspdk_keyring_file.so 00:13:26.905 SYMLINK libspdk_keyring_linux.so 00:13:26.905 SYMLINK libspdk_blob_bdev.so 00:13:26.905 SO libspdk_accel_iaa.so.3.0 00:13:26.905 LIB libspdk_accel_dsa.a 00:13:26.905 SYMLINK libspdk_accel_error.so 00:13:26.905 SYMLINK libspdk_accel_ioat.so 00:13:26.905 SO libspdk_accel_dsa.so.5.0 00:13:26.905 SYMLINK libspdk_accel_iaa.so 00:13:26.905 LIB libspdk_scheduler_dpdk_governor.a 00:13:26.905 SYMLINK libspdk_accel_dsa.so 00:13:26.905 CC module/scheduler/gscheduler/gscheduler.o 00:13:26.905 SO libspdk_scheduler_dpdk_governor.so.4.0 00:13:26.905 SYMLINK libspdk_scheduler_dpdk_governor.so 00:13:27.163 CC module/bdev/malloc/bdev_malloc.o 00:13:27.163 CC module/bdev/error/vbdev_error.o 00:13:27.163 CC module/blobfs/bdev/blobfs_bdev.o 00:13:27.163 CC module/bdev/lvol/vbdev_lvol.o 00:13:27.163 CC module/bdev/delay/vbdev_delay.o 00:13:27.163 CC module/bdev/gpt/gpt.o 
00:13:27.163 CC module/bdev/null/bdev_null.o 00:13:27.163 LIB libspdk_scheduler_gscheduler.a 00:13:27.163 LIB libspdk_sock_posix.a 00:13:27.163 SO libspdk_scheduler_gscheduler.so.4.0 00:13:27.163 SO libspdk_sock_posix.so.6.0 00:13:27.163 CC module/bdev/nvme/bdev_nvme.o 00:13:27.163 SYMLINK libspdk_scheduler_gscheduler.so 00:13:27.163 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:13:27.163 CC module/bdev/null/bdev_null_rpc.o 00:13:27.163 SYMLINK libspdk_sock_posix.so 00:13:27.422 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:13:27.422 CC module/bdev/gpt/vbdev_gpt.o 00:13:27.422 CC module/bdev/delay/vbdev_delay_rpc.o 00:13:27.422 CC module/bdev/error/vbdev_error_rpc.o 00:13:27.422 CC module/bdev/malloc/bdev_malloc_rpc.o 00:13:27.422 LIB libspdk_blobfs_bdev.a 00:13:27.422 LIB libspdk_bdev_null.a 00:13:27.422 SO libspdk_blobfs_bdev.so.6.0 00:13:27.422 SO libspdk_bdev_null.so.6.0 00:13:27.422 CC module/bdev/nvme/bdev_nvme_rpc.o 00:13:27.422 LIB libspdk_bdev_delay.a 00:13:27.422 LIB libspdk_bdev_error.a 00:13:27.422 SYMLINK libspdk_blobfs_bdev.so 00:13:27.680 CC module/bdev/nvme/nvme_rpc.o 00:13:27.680 SO libspdk_bdev_error.so.6.0 00:13:27.680 SO libspdk_bdev_delay.so.6.0 00:13:27.680 SYMLINK libspdk_bdev_null.so 00:13:27.680 LIB libspdk_bdev_malloc.a 00:13:27.680 CC module/bdev/nvme/bdev_mdns_client.o 00:13:27.680 CC module/bdev/passthru/vbdev_passthru.o 00:13:27.680 LIB libspdk_bdev_gpt.a 00:13:27.680 SYMLINK libspdk_bdev_error.so 00:13:27.680 SO libspdk_bdev_malloc.so.6.0 00:13:27.680 LIB libspdk_bdev_lvol.a 00:13:27.680 SYMLINK libspdk_bdev_delay.so 00:13:27.680 CC module/bdev/nvme/vbdev_opal.o 00:13:27.680 SO libspdk_bdev_gpt.so.6.0 00:13:27.680 SO libspdk_bdev_lvol.so.6.0 00:13:27.680 SYMLINK libspdk_bdev_malloc.so 00:13:27.680 SYMLINK libspdk_bdev_gpt.so 00:13:27.680 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:13:27.680 SYMLINK libspdk_bdev_lvol.so 00:13:27.680 CC module/bdev/raid/bdev_raid.o 00:13:27.680 CC module/bdev/nvme/vbdev_opal_rpc.o 00:13:27.948 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:13:27.948 CC module/bdev/split/vbdev_split.o 00:13:27.948 CC module/bdev/zone_block/vbdev_zone_block.o 00:13:27.948 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:13:27.948 LIB libspdk_bdev_passthru.a 00:13:27.948 SO libspdk_bdev_passthru.so.6.0 00:13:27.948 CC module/bdev/raid/bdev_raid_rpc.o 00:13:27.948 SYMLINK libspdk_bdev_passthru.so 00:13:27.948 CC module/bdev/split/vbdev_split_rpc.o 00:13:27.948 CC module/bdev/raid/bdev_raid_sb.o 00:13:27.948 CC module/bdev/aio/bdev_aio.o 00:13:28.213 CC module/bdev/raid/raid0.o 00:13:28.213 CC module/bdev/aio/bdev_aio_rpc.o 00:13:28.213 LIB libspdk_bdev_zone_block.a 00:13:28.213 LIB libspdk_bdev_split.a 00:13:28.213 CC module/bdev/ftl/bdev_ftl.o 00:13:28.213 CC module/bdev/raid/raid1.o 00:13:28.213 SO libspdk_bdev_zone_block.so.6.0 00:13:28.213 SO libspdk_bdev_split.so.6.0 00:13:28.213 SYMLINK libspdk_bdev_split.so 00:13:28.213 SYMLINK libspdk_bdev_zone_block.so 00:13:28.213 CC module/bdev/raid/concat.o 00:13:28.213 CC module/bdev/ftl/bdev_ftl_rpc.o 00:13:28.471 LIB libspdk_bdev_aio.a 00:13:28.471 SO libspdk_bdev_aio.so.6.0 00:13:28.471 CC module/bdev/iscsi/bdev_iscsi.o 00:13:28.471 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:13:28.471 CC module/bdev/virtio/bdev_virtio_scsi.o 00:13:28.471 CC module/bdev/virtio/bdev_virtio_blk.o 00:13:28.471 SYMLINK libspdk_bdev_aio.so 00:13:28.471 CC module/bdev/virtio/bdev_virtio_rpc.o 00:13:28.471 LIB libspdk_bdev_ftl.a 00:13:28.471 SO libspdk_bdev_ftl.so.6.0 00:13:28.729 SYMLINK libspdk_bdev_ftl.so 00:13:28.729 LIB 
libspdk_bdev_raid.a 00:13:28.729 SO libspdk_bdev_raid.so.6.0 00:13:28.729 LIB libspdk_bdev_iscsi.a 00:13:28.729 SYMLINK libspdk_bdev_raid.so 00:13:28.729 SO libspdk_bdev_iscsi.so.6.0 00:13:28.988 LIB libspdk_bdev_virtio.a 00:13:28.988 SYMLINK libspdk_bdev_iscsi.so 00:13:28.988 SO libspdk_bdev_virtio.so.6.0 00:13:28.988 SYMLINK libspdk_bdev_virtio.so 00:13:29.247 LIB libspdk_bdev_nvme.a 00:13:29.506 SO libspdk_bdev_nvme.so.7.0 00:13:29.506 SYMLINK libspdk_bdev_nvme.so 00:13:30.073 CC module/event/subsystems/vmd/vmd_rpc.o 00:13:30.073 CC module/event/subsystems/vmd/vmd.o 00:13:30.073 CC module/event/subsystems/scheduler/scheduler.o 00:13:30.073 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:13:30.073 CC module/event/subsystems/keyring/keyring.o 00:13:30.073 CC module/event/subsystems/iobuf/iobuf.o 00:13:30.073 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:13:30.073 CC module/event/subsystems/sock/sock.o 00:13:30.331 LIB libspdk_event_scheduler.a 00:13:30.331 LIB libspdk_event_vmd.a 00:13:30.331 LIB libspdk_event_iobuf.a 00:13:30.331 LIB libspdk_event_keyring.a 00:13:30.331 LIB libspdk_event_vhost_blk.a 00:13:30.331 SO libspdk_event_vmd.so.6.0 00:13:30.331 SO libspdk_event_scheduler.so.4.0 00:13:30.331 SO libspdk_event_iobuf.so.3.0 00:13:30.331 LIB libspdk_event_sock.a 00:13:30.331 SO libspdk_event_vhost_blk.so.3.0 00:13:30.331 SO libspdk_event_keyring.so.1.0 00:13:30.331 SO libspdk_event_sock.so.5.0 00:13:30.331 SYMLINK libspdk_event_vmd.so 00:13:30.331 SYMLINK libspdk_event_scheduler.so 00:13:30.331 SYMLINK libspdk_event_iobuf.so 00:13:30.331 SYMLINK libspdk_event_keyring.so 00:13:30.331 SYMLINK libspdk_event_vhost_blk.so 00:13:30.331 SYMLINK libspdk_event_sock.so 00:13:30.590 CC module/event/subsystems/accel/accel.o 00:13:30.849 LIB libspdk_event_accel.a 00:13:30.849 SO libspdk_event_accel.so.6.0 00:13:30.849 SYMLINK libspdk_event_accel.so 00:13:31.417 CC module/event/subsystems/bdev/bdev.o 00:13:31.417 LIB libspdk_event_bdev.a 00:13:31.675 SO libspdk_event_bdev.so.6.0 00:13:31.675 SYMLINK libspdk_event_bdev.so 00:13:31.933 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:13:31.933 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:13:31.933 CC module/event/subsystems/ublk/ublk.o 00:13:31.933 CC module/event/subsystems/scsi/scsi.o 00:13:31.933 CC module/event/subsystems/nbd/nbd.o 00:13:32.190 LIB libspdk_event_ublk.a 00:13:32.190 LIB libspdk_event_scsi.a 00:13:32.190 SO libspdk_event_ublk.so.3.0 00:13:32.190 LIB libspdk_event_nbd.a 00:13:32.190 SO libspdk_event_scsi.so.6.0 00:13:32.190 LIB libspdk_event_nvmf.a 00:13:32.190 SO libspdk_event_nbd.so.6.0 00:13:32.190 SYMLINK libspdk_event_ublk.so 00:13:32.190 SYMLINK libspdk_event_scsi.so 00:13:32.190 SO libspdk_event_nvmf.so.6.0 00:13:32.190 SYMLINK libspdk_event_nbd.so 00:13:32.446 SYMLINK libspdk_event_nvmf.so 00:13:32.446 CC module/event/subsystems/iscsi/iscsi.o 00:13:32.703 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:13:32.703 LIB libspdk_event_vhost_scsi.a 00:13:32.703 LIB libspdk_event_iscsi.a 00:13:32.703 SO libspdk_event_vhost_scsi.so.3.0 00:13:32.703 SO libspdk_event_iscsi.so.6.0 00:13:32.960 SYMLINK libspdk_event_vhost_scsi.so 00:13:32.960 SYMLINK libspdk_event_iscsi.so 00:13:32.960 SO libspdk.so.6.0 00:13:33.216 SYMLINK libspdk.so 00:13:33.491 CC app/spdk_nvme_perf/perf.o 00:13:33.491 CC app/trace_record/trace_record.o 00:13:33.491 CC app/spdk_nvme_identify/identify.o 00:13:33.491 CC app/spdk_lspci/spdk_lspci.o 00:13:33.491 CXX app/trace/trace.o 00:13:33.491 CC app/iscsi_tgt/iscsi_tgt.o 00:13:33.491 CC 
app/nvmf_tgt/nvmf_main.o 00:13:33.491 CC app/spdk_tgt/spdk_tgt.o 00:13:33.491 CC test/thread/poller_perf/poller_perf.o 00:13:33.491 CC examples/util/zipf/zipf.o 00:13:33.491 LINK spdk_lspci 00:13:33.774 LINK nvmf_tgt 00:13:33.774 LINK spdk_trace_record 00:13:33.774 LINK poller_perf 00:13:33.774 LINK iscsi_tgt 00:13:33.774 LINK zipf 00:13:33.774 LINK spdk_tgt 00:13:33.774 LINK spdk_trace 00:13:33.774 CC app/spdk_nvme_discover/discovery_aer.o 00:13:34.032 CC app/spdk_top/spdk_top.o 00:13:34.032 CC examples/ioat/perf/perf.o 00:13:34.032 LINK spdk_nvme_discover 00:13:34.032 CC test/dma/test_dma/test_dma.o 00:13:34.032 CC app/spdk_dd/spdk_dd.o 00:13:34.032 CC app/fio/nvme/fio_plugin.o 00:13:34.032 LINK spdk_nvme_perf 00:13:34.290 LINK spdk_nvme_identify 00:13:34.290 CC test/app/bdev_svc/bdev_svc.o 00:13:34.290 LINK ioat_perf 00:13:34.290 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:13:34.290 LINK bdev_svc 00:13:34.290 LINK spdk_dd 00:13:34.290 LINK test_dma 00:13:34.549 CC examples/ioat/verify/verify.o 00:13:34.549 CC app/fio/bdev/fio_plugin.o 00:13:34.549 CC app/vhost/vhost.o 00:13:34.549 LINK verify 00:13:34.549 CC test/app/histogram_perf/histogram_perf.o 00:13:34.549 LINK vhost 00:13:34.549 CC test/app/jsoncat/jsoncat.o 00:13:34.809 LINK nvme_fuzz 00:13:34.809 LINK spdk_nvme 00:13:34.809 CC test/app/stub/stub.o 00:13:34.809 LINK spdk_top 00:13:34.809 LINK jsoncat 00:13:34.809 LINK histogram_perf 00:13:34.809 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:13:34.809 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:13:35.069 LINK stub 00:13:35.069 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:13:35.069 LINK spdk_bdev 00:13:35.069 TEST_HEADER include/spdk/accel.h 00:13:35.069 TEST_HEADER include/spdk/accel_module.h 00:13:35.069 TEST_HEADER include/spdk/assert.h 00:13:35.069 TEST_HEADER include/spdk/barrier.h 00:13:35.069 TEST_HEADER include/spdk/base64.h 00:13:35.069 TEST_HEADER include/spdk/bdev.h 00:13:35.069 TEST_HEADER include/spdk/bdev_module.h 00:13:35.069 TEST_HEADER include/spdk/bdev_zone.h 00:13:35.069 TEST_HEADER include/spdk/bit_array.h 00:13:35.069 TEST_HEADER include/spdk/bit_pool.h 00:13:35.069 TEST_HEADER include/spdk/blob_bdev.h 00:13:35.069 TEST_HEADER include/spdk/blobfs_bdev.h 00:13:35.069 TEST_HEADER include/spdk/blobfs.h 00:13:35.069 TEST_HEADER include/spdk/blob.h 00:13:35.069 TEST_HEADER include/spdk/conf.h 00:13:35.069 TEST_HEADER include/spdk/config.h 00:13:35.069 TEST_HEADER include/spdk/cpuset.h 00:13:35.069 TEST_HEADER include/spdk/crc16.h 00:13:35.069 TEST_HEADER include/spdk/crc32.h 00:13:35.069 TEST_HEADER include/spdk/crc64.h 00:13:35.069 TEST_HEADER include/spdk/dif.h 00:13:35.069 TEST_HEADER include/spdk/dma.h 00:13:35.069 TEST_HEADER include/spdk/endian.h 00:13:35.069 TEST_HEADER include/spdk/env_dpdk.h 00:13:35.069 TEST_HEADER include/spdk/env.h 00:13:35.069 TEST_HEADER include/spdk/event.h 00:13:35.069 TEST_HEADER include/spdk/fd_group.h 00:13:35.069 TEST_HEADER include/spdk/fd.h 00:13:35.069 TEST_HEADER include/spdk/file.h 00:13:35.069 TEST_HEADER include/spdk/ftl.h 00:13:35.069 TEST_HEADER include/spdk/gpt_spec.h 00:13:35.069 TEST_HEADER include/spdk/hexlify.h 00:13:35.069 TEST_HEADER include/spdk/histogram_data.h 00:13:35.069 TEST_HEADER include/spdk/idxd.h 00:13:35.069 TEST_HEADER include/spdk/idxd_spec.h 00:13:35.069 CC examples/interrupt_tgt/interrupt_tgt.o 00:13:35.069 TEST_HEADER include/spdk/init.h 00:13:35.069 TEST_HEADER include/spdk/ioat.h 00:13:35.069 TEST_HEADER include/spdk/ioat_spec.h 00:13:35.069 TEST_HEADER include/spdk/iscsi_spec.h 00:13:35.069 
TEST_HEADER include/spdk/json.h 00:13:35.069 TEST_HEADER include/spdk/jsonrpc.h 00:13:35.069 TEST_HEADER include/spdk/keyring.h 00:13:35.069 TEST_HEADER include/spdk/keyring_module.h 00:13:35.069 TEST_HEADER include/spdk/likely.h 00:13:35.069 TEST_HEADER include/spdk/log.h 00:13:35.069 TEST_HEADER include/spdk/lvol.h 00:13:35.069 TEST_HEADER include/spdk/memory.h 00:13:35.069 TEST_HEADER include/spdk/mmio.h 00:13:35.069 TEST_HEADER include/spdk/nbd.h 00:13:35.069 TEST_HEADER include/spdk/notify.h 00:13:35.333 TEST_HEADER include/spdk/nvme.h 00:13:35.333 TEST_HEADER include/spdk/nvme_intel.h 00:13:35.333 TEST_HEADER include/spdk/nvme_ocssd.h 00:13:35.333 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:13:35.333 TEST_HEADER include/spdk/nvme_spec.h 00:13:35.333 TEST_HEADER include/spdk/nvme_zns.h 00:13:35.333 TEST_HEADER include/spdk/nvmf_cmd.h 00:13:35.333 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:13:35.333 TEST_HEADER include/spdk/nvmf.h 00:13:35.333 TEST_HEADER include/spdk/nvmf_spec.h 00:13:35.333 TEST_HEADER include/spdk/nvmf_transport.h 00:13:35.333 TEST_HEADER include/spdk/opal.h 00:13:35.333 TEST_HEADER include/spdk/opal_spec.h 00:13:35.333 TEST_HEADER include/spdk/pci_ids.h 00:13:35.333 TEST_HEADER include/spdk/pipe.h 00:13:35.333 TEST_HEADER include/spdk/queue.h 00:13:35.333 TEST_HEADER include/spdk/reduce.h 00:13:35.333 TEST_HEADER include/spdk/rpc.h 00:13:35.333 TEST_HEADER include/spdk/scheduler.h 00:13:35.333 TEST_HEADER include/spdk/scsi.h 00:13:35.333 TEST_HEADER include/spdk/scsi_spec.h 00:13:35.333 TEST_HEADER include/spdk/sock.h 00:13:35.333 TEST_HEADER include/spdk/stdinc.h 00:13:35.333 TEST_HEADER include/spdk/string.h 00:13:35.333 TEST_HEADER include/spdk/thread.h 00:13:35.333 TEST_HEADER include/spdk/trace.h 00:13:35.333 TEST_HEADER include/spdk/trace_parser.h 00:13:35.333 TEST_HEADER include/spdk/tree.h 00:13:35.333 TEST_HEADER include/spdk/ublk.h 00:13:35.333 TEST_HEADER include/spdk/util.h 00:13:35.333 TEST_HEADER include/spdk/uuid.h 00:13:35.333 TEST_HEADER include/spdk/version.h 00:13:35.333 TEST_HEADER include/spdk/vfio_user_pci.h 00:13:35.333 CC examples/sock/hello_world/hello_sock.o 00:13:35.333 TEST_HEADER include/spdk/vfio_user_spec.h 00:13:35.333 TEST_HEADER include/spdk/vhost.h 00:13:35.333 TEST_HEADER include/spdk/vmd.h 00:13:35.333 TEST_HEADER include/spdk/xor.h 00:13:35.333 TEST_HEADER include/spdk/zipf.h 00:13:35.333 CXX test/cpp_headers/accel.o 00:13:35.333 CC examples/thread/thread/thread_ex.o 00:13:35.333 LINK interrupt_tgt 00:13:35.333 CC test/event/event_perf/event_perf.o 00:13:35.333 CC examples/vmd/lsvmd/lsvmd.o 00:13:35.333 CC test/env/mem_callbacks/mem_callbacks.o 00:13:35.333 CXX test/cpp_headers/accel_module.o 00:13:35.597 LINK vhost_fuzz 00:13:35.598 LINK hello_sock 00:13:35.598 LINK lsvmd 00:13:35.598 LINK event_perf 00:13:35.598 LINK thread 00:13:35.598 CXX test/cpp_headers/assert.o 00:13:35.598 CXX test/cpp_headers/barrier.o 00:13:35.598 CC test/event/reactor/reactor.o 00:13:35.598 CXX test/cpp_headers/base64.o 00:13:35.864 CC examples/vmd/led/led.o 00:13:35.864 LINK reactor 00:13:35.864 CXX test/cpp_headers/bdev.o 00:13:35.864 CC test/event/reactor_perf/reactor_perf.o 00:13:35.864 CC examples/idxd/perf/perf.o 00:13:35.864 CC test/event/app_repeat/app_repeat.o 00:13:35.864 LINK led 00:13:35.864 CC test/event/scheduler/scheduler.o 00:13:36.129 LINK mem_callbacks 00:13:36.129 LINK reactor_perf 00:13:36.129 CXX test/cpp_headers/bdev_module.o 00:13:36.129 LINK app_repeat 00:13:36.129 CC test/env/vtophys/vtophys.o 00:13:36.129 LINK scheduler 
00:13:36.129 LINK idxd_perf 00:13:36.420 CXX test/cpp_headers/bdev_zone.o 00:13:36.420 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:13:36.420 LINK vtophys 00:13:36.420 CC examples/nvme/hello_world/hello_world.o 00:13:36.420 CC test/env/memory/memory_ut.o 00:13:36.420 CC test/env/pci/pci_ut.o 00:13:36.420 CXX test/cpp_headers/bit_array.o 00:13:36.420 CXX test/cpp_headers/bit_pool.o 00:13:36.420 LINK env_dpdk_post_init 00:13:36.420 CXX test/cpp_headers/blob_bdev.o 00:13:36.712 CXX test/cpp_headers/blobfs_bdev.o 00:13:36.712 LINK hello_world 00:13:36.712 LINK iscsi_fuzz 00:13:36.712 CXX test/cpp_headers/blobfs.o 00:13:36.712 LINK pci_ut 00:13:36.969 CC examples/nvme/nvme_manage/nvme_manage.o 00:13:36.969 CC examples/nvme/reconnect/reconnect.o 00:13:36.969 CC test/nvme/aer/aer.o 00:13:36.969 CC test/nvme/reset/reset.o 00:13:36.969 CC test/nvme/sgl/sgl.o 00:13:36.969 CXX test/cpp_headers/blob.o 00:13:36.969 CXX test/cpp_headers/conf.o 00:13:37.228 CXX test/cpp_headers/config.o 00:13:37.228 LINK aer 00:13:37.228 LINK reset 00:13:37.228 CXX test/cpp_headers/cpuset.o 00:13:37.228 CC test/nvme/e2edp/nvme_dp.o 00:13:37.228 LINK reconnect 00:13:37.228 LINK sgl 00:13:37.228 CC test/nvme/overhead/overhead.o 00:13:37.228 CXX test/cpp_headers/crc16.o 00:13:37.228 CXX test/cpp_headers/crc32.o 00:13:37.228 LINK nvme_manage 00:13:37.487 LINK nvme_dp 00:13:37.487 CC test/nvme/err_injection/err_injection.o 00:13:37.487 CXX test/cpp_headers/crc64.o 00:13:37.487 CC test/nvme/startup/startup.o 00:13:37.487 CC test/nvme/reserve/reserve.o 00:13:37.487 LINK memory_ut 00:13:37.487 LINK overhead 00:13:37.746 CC test/nvme/simple_copy/simple_copy.o 00:13:37.746 CC examples/nvme/arbitration/arbitration.o 00:13:37.746 LINK err_injection 00:13:37.746 LINK startup 00:13:37.746 CXX test/cpp_headers/dif.o 00:13:38.006 CC test/nvme/connect_stress/connect_stress.o 00:13:38.006 LINK reserve 00:13:38.006 CXX test/cpp_headers/dma.o 00:13:38.006 CC test/nvme/boot_partition/boot_partition.o 00:13:38.006 LINK simple_copy 00:13:38.006 CC test/nvme/compliance/nvme_compliance.o 00:13:38.264 CXX test/cpp_headers/endian.o 00:13:38.264 CC test/nvme/fused_ordering/fused_ordering.o 00:13:38.264 CXX test/cpp_headers/env_dpdk.o 00:13:38.264 LINK connect_stress 00:13:38.264 LINK arbitration 00:13:38.264 LINK boot_partition 00:13:38.264 CC examples/accel/perf/accel_perf.o 00:13:38.522 CC test/nvme/doorbell_aers/doorbell_aers.o 00:13:38.522 CXX test/cpp_headers/env.o 00:13:38.522 CC test/rpc_client/rpc_client_test.o 00:13:38.522 LINK fused_ordering 00:13:38.522 LINK nvme_compliance 00:13:38.522 CC test/nvme/fdp/fdp.o 00:13:38.522 CC examples/nvme/hotplug/hotplug.o 00:13:38.779 CXX test/cpp_headers/event.o 00:13:38.779 CC test/nvme/cuse/cuse.o 00:13:38.779 LINK doorbell_aers 00:13:38.779 CXX test/cpp_headers/fd_group.o 00:13:38.779 LINK rpc_client_test 00:13:38.779 LINK accel_perf 00:13:39.036 CXX test/cpp_headers/fd.o 00:13:39.036 LINK hotplug 00:13:39.036 LINK fdp 00:13:39.036 CC examples/nvme/cmb_copy/cmb_copy.o 00:13:39.036 CC test/accel/dif/dif.o 00:13:39.036 CXX test/cpp_headers/file.o 00:13:39.036 CXX test/cpp_headers/ftl.o 00:13:39.036 CXX test/cpp_headers/gpt_spec.o 00:13:39.036 CC test/blobfs/mkfs/mkfs.o 00:13:39.036 CC examples/blob/hello_world/hello_blob.o 00:13:39.296 CC examples/blob/cli/blobcli.o 00:13:39.296 LINK cmb_copy 00:13:39.296 CXX test/cpp_headers/hexlify.o 00:13:39.296 LINK mkfs 00:13:39.296 LINK hello_blob 00:13:39.591 CXX test/cpp_headers/histogram_data.o 00:13:39.591 CC examples/nvme/abort/abort.o 
00:13:39.591 CC examples/bdev/hello_world/hello_bdev.o 00:13:39.591 LINK dif 00:13:39.591 CXX test/cpp_headers/idxd.o 00:13:39.591 CC test/lvol/esnap/esnap.o 00:13:39.591 CXX test/cpp_headers/idxd_spec.o 00:13:39.591 LINK blobcli 00:13:39.591 CC examples/bdev/bdevperf/bdevperf.o 00:13:39.849 LINK hello_bdev 00:13:39.849 CXX test/cpp_headers/init.o 00:13:39.849 LINK abort 00:13:39.849 LINK cuse 00:13:39.849 CXX test/cpp_headers/ioat.o 00:13:39.849 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:13:39.849 CXX test/cpp_headers/ioat_spec.o 00:13:39.849 CXX test/cpp_headers/iscsi_spec.o 00:13:39.849 CC test/bdev/bdevio/bdevio.o 00:13:40.106 CXX test/cpp_headers/json.o 00:13:40.106 CXX test/cpp_headers/jsonrpc.o 00:13:40.106 LINK pmr_persistence 00:13:40.106 CXX test/cpp_headers/keyring.o 00:13:40.106 CXX test/cpp_headers/keyring_module.o 00:13:40.106 CXX test/cpp_headers/likely.o 00:13:40.106 CXX test/cpp_headers/log.o 00:13:40.364 CXX test/cpp_headers/lvol.o 00:13:40.364 CXX test/cpp_headers/memory.o 00:13:40.364 CXX test/cpp_headers/mmio.o 00:13:40.364 CXX test/cpp_headers/nbd.o 00:13:40.364 CXX test/cpp_headers/notify.o 00:13:40.364 CXX test/cpp_headers/nvme.o 00:13:40.364 LINK bdevio 00:13:40.364 CXX test/cpp_headers/nvme_intel.o 00:13:40.364 LINK bdevperf 00:13:40.364 CXX test/cpp_headers/nvme_ocssd.o 00:13:40.621 CXX test/cpp_headers/nvme_ocssd_spec.o 00:13:40.621 CXX test/cpp_headers/nvme_spec.o 00:13:40.621 CXX test/cpp_headers/nvme_zns.o 00:13:40.621 CXX test/cpp_headers/nvmf_cmd.o 00:13:40.621 CXX test/cpp_headers/nvmf_fc_spec.o 00:13:40.621 CXX test/cpp_headers/nvmf.o 00:13:40.621 CXX test/cpp_headers/nvmf_spec.o 00:13:40.621 CXX test/cpp_headers/nvmf_transport.o 00:13:40.621 CXX test/cpp_headers/opal.o 00:13:40.621 CXX test/cpp_headers/opal_spec.o 00:13:40.621 CXX test/cpp_headers/pci_ids.o 00:13:40.621 CXX test/cpp_headers/pipe.o 00:13:40.879 CXX test/cpp_headers/queue.o 00:13:40.879 CXX test/cpp_headers/reduce.o 00:13:40.879 CXX test/cpp_headers/rpc.o 00:13:40.879 CXX test/cpp_headers/scheduler.o 00:13:40.879 CC examples/nvmf/nvmf/nvmf.o 00:13:40.879 CXX test/cpp_headers/scsi.o 00:13:40.879 CXX test/cpp_headers/scsi_spec.o 00:13:40.879 CXX test/cpp_headers/sock.o 00:13:40.879 CXX test/cpp_headers/stdinc.o 00:13:40.879 CXX test/cpp_headers/string.o 00:13:40.879 CXX test/cpp_headers/thread.o 00:13:40.879 CXX test/cpp_headers/trace.o 00:13:40.879 CXX test/cpp_headers/trace_parser.o 00:13:41.138 CXX test/cpp_headers/tree.o 00:13:41.138 CXX test/cpp_headers/ublk.o 00:13:41.138 CXX test/cpp_headers/util.o 00:13:41.138 CXX test/cpp_headers/uuid.o 00:13:41.138 CXX test/cpp_headers/version.o 00:13:41.138 CXX test/cpp_headers/vfio_user_pci.o 00:13:41.138 CXX test/cpp_headers/vfio_user_spec.o 00:13:41.138 CXX test/cpp_headers/vhost.o 00:13:41.138 CXX test/cpp_headers/vmd.o 00:13:41.138 CXX test/cpp_headers/xor.o 00:13:41.138 CXX test/cpp_headers/zipf.o 00:13:41.138 LINK nvmf 00:13:44.422 LINK esnap 00:13:44.422 00:13:44.422 real 1m4.357s 00:13:44.422 user 6m13.810s 00:13:44.422 sys 1m31.925s 00:13:44.422 09:54:57 make -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:13:44.422 09:54:57 make -- common/autotest_common.sh@10 -- $ set +x 00:13:44.422 ************************************ 00:13:44.422 END TEST make 00:13:44.422 ************************************ 00:13:44.422 09:54:57 -- common/autotest_common.sh@1142 -- $ return 0 00:13:44.422 09:54:57 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:13:44.422 09:54:57 -- pm/common@29 -- $ signal_monitor_resources TERM 
00:13:44.422 09:54:57 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:13:44.422 09:54:57 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:13:44.422 09:54:57 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:13:44.422 09:54:57 -- pm/common@44 -- $ pid=5357 00:13:44.422 09:54:57 -- pm/common@50 -- $ kill -TERM 5357 00:13:44.422 09:54:57 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:13:44.422 09:54:57 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:13:44.422 09:54:57 -- pm/common@44 -- $ pid=5358 00:13:44.422 09:54:57 -- pm/common@50 -- $ kill -TERM 5358 00:13:44.422 09:54:57 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:44.422 09:54:57 -- nvmf/common.sh@7 -- # uname -s 00:13:44.422 09:54:57 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:44.422 09:54:57 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:44.422 09:54:57 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:44.423 09:54:57 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:44.423 09:54:57 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:44.423 09:54:57 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:44.423 09:54:57 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:44.423 09:54:57 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:44.423 09:54:57 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:44.423 09:54:57 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:44.680 09:54:58 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec 00:13:44.680 09:54:58 -- nvmf/common.sh@18 -- # NVME_HOSTID=a2b6b25a-cc90-4aea-9f09-c06f8a634aec 00:13:44.680 09:54:58 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:44.680 09:54:58 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:44.680 09:54:58 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:44.680 09:54:58 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:44.680 09:54:58 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:44.680 09:54:58 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:44.680 09:54:58 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:44.680 09:54:58 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:44.680 09:54:58 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:44.680 09:54:58 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:44.680 09:54:58 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:44.680 09:54:58 -- paths/export.sh@5 -- # export PATH 00:13:44.680 09:54:58 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:44.680 09:54:58 -- nvmf/common.sh@47 -- # : 0 00:13:44.680 09:54:58 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:44.680 09:54:58 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:44.680 09:54:58 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:44.680 09:54:58 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:44.680 09:54:58 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:44.680 09:54:58 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:44.680 09:54:58 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:44.680 09:54:58 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:44.680 09:54:58 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:13:44.680 09:54:58 -- spdk/autotest.sh@32 -- # uname -s 00:13:44.680 09:54:58 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:13:44.680 09:54:58 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:13:44.680 09:54:58 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:13:44.680 09:54:58 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:13:44.680 09:54:58 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:13:44.680 09:54:58 -- spdk/autotest.sh@44 -- # modprobe nbd 00:13:44.680 09:54:58 -- spdk/autotest.sh@46 -- # type -P udevadm 00:13:44.680 09:54:58 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:13:44.680 09:54:58 -- spdk/autotest.sh@48 -- # udevadm_pid=54706 00:13:44.680 09:54:58 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:13:44.680 09:54:58 -- pm/common@17 -- # local monitor 00:13:44.680 09:54:58 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:13:44.680 09:54:58 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:13:44.680 09:54:58 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:13:44.680 09:54:58 -- pm/common@25 -- # sleep 1 00:13:44.680 09:54:58 -- pm/common@21 -- # date +%s 00:13:44.680 09:54:58 -- pm/common@21 -- # date +%s 00:13:44.939 09:54:58 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1721037298 00:13:44.939 09:54:58 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1721037298 00:13:44.939 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1721037298_collect-vmstat.pm.log 00:13:44.939 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1721037298_collect-cpu-load.pm.log 00:13:45.874 09:54:59 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:13:45.874 09:54:59 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:13:45.874 09:54:59 -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:45.874 09:54:59 -- common/autotest_common.sh@10 -- # set +x 00:13:45.874 09:54:59 -- spdk/autotest.sh@59 -- # create_test_list 00:13:45.874 09:54:59 -- common/autotest_common.sh@746 -- # xtrace_disable 00:13:45.874 09:54:59 -- common/autotest_common.sh@10 -- # set +x 00:13:45.874 09:54:59 -- 
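The nvmf/common.sh lines traced above build a per-run NVMe host identity: nvme gen-hostnqn emits an NQN of the form nqn.2014-08.org.nvmexpress:uuid:<uuid>, the UUID suffix doubles as the host ID, and both are packed into ready-made "nvme connect" arguments alongside the TCP port defaults. A minimal sketch of that pattern, assuming nvme-cli is installed (the commented connect line is illustrative and not taken from this log):

    #!/usr/bin/env bash
    # Per-run NVMe-oF host identity, mirroring the nvmf/common.sh trace above.
    NVMF_PORT=4420
    NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
    NVME_HOSTNQN=$(nvme gen-hostnqn)              # nqn.2014-08.org.nvmexpress:uuid:<uuid>
    NVME_HOSTID=${NVME_HOSTNQN##*uuid:}           # presumably just the UUID portion of the NQN
    NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
    NVME_CONNECT='nvme connect'

    # Illustrative use against a TCP target (the address is a placeholder):
    # $NVME_CONNECT "${NVME_HOST[@]}" -t tcp -a 10.0.0.1 -s "$NVMF_PORT" -n "$NVME_SUBNQN"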
spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:13:45.874 09:54:59 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:13:45.874 09:54:59 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:13:45.874 09:54:59 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:13:45.874 09:54:59 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:13:45.874 09:54:59 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:13:45.874 09:54:59 -- common/autotest_common.sh@1455 -- # uname 00:13:45.874 09:54:59 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:13:45.874 09:54:59 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:13:45.874 09:54:59 -- common/autotest_common.sh@1475 -- # uname 00:13:45.874 09:54:59 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:13:45.874 09:54:59 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:13:45.874 09:54:59 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:13:45.874 09:54:59 -- spdk/autotest.sh@72 -- # hash lcov 00:13:45.874 09:54:59 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:13:45.874 09:54:59 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:13:45.874 --rc lcov_branch_coverage=1 00:13:45.874 --rc lcov_function_coverage=1 00:13:45.874 --rc genhtml_branch_coverage=1 00:13:45.874 --rc genhtml_function_coverage=1 00:13:45.874 --rc genhtml_legend=1 00:13:45.874 --rc geninfo_all_blocks=1 00:13:45.874 ' 00:13:45.874 09:54:59 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:13:45.874 --rc lcov_branch_coverage=1 00:13:45.874 --rc lcov_function_coverage=1 00:13:45.874 --rc genhtml_branch_coverage=1 00:13:45.874 --rc genhtml_function_coverage=1 00:13:45.874 --rc genhtml_legend=1 00:13:45.874 --rc geninfo_all_blocks=1 00:13:45.874 ' 00:13:45.874 09:54:59 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:13:45.874 --rc lcov_branch_coverage=1 00:13:45.874 --rc lcov_function_coverage=1 00:13:45.874 --rc genhtml_branch_coverage=1 00:13:45.874 --rc genhtml_function_coverage=1 00:13:45.874 --rc genhtml_legend=1 00:13:45.874 --rc geninfo_all_blocks=1 00:13:45.874 --no-external' 00:13:45.874 09:54:59 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:13:45.874 --rc lcov_branch_coverage=1 00:13:45.874 --rc lcov_function_coverage=1 00:13:45.874 --rc genhtml_branch_coverage=1 00:13:45.874 --rc genhtml_function_coverage=1 00:13:45.874 --rc genhtml_legend=1 00:13:45.874 --rc geninfo_all_blocks=1 00:13:45.874 --no-external' 00:13:45.874 09:54:59 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:13:45.874 lcov: LCOV version 1.14 00:13:45.874 09:54:59 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:14:00.751 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:14:00.751 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:14:12.976 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno:no functions found 00:14:12.976 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno 
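These autotest.sh lines switch on gcov/lcov coverage: branch and function counters are enabled through LCOV_OPTS, and an initial (-i) capture tagged Baseline is taken before any test runs, so the geninfo "no functions found" warnings that fill this part of the log are expected for header-only .gcno objects with nothing executed yet. A sketch of the baseline-then-merge lcov flow the script is following (only cov_base.info appears in this log; the post-test file names are illustrative):

    LCOV_OPTS="--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 \
      --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 \
      --rc genhtml_legend=1 --rc geninfo_all_blocks=1"
    LCOV="lcov $LCOV_OPTS --no-external"
    src=/home/vagrant/spdk_repo/spdk
    out=$src/../output

    # Empty baseline: lists every instrumented file so untested ones still report 0%.
    $LCOV -q -c -i -t Baseline -d "$src" -o "$out/cov_base.info"

    # ... the test suites run here and update the .gcda counters ...

    # Post-test capture, then merge with the baseline (standard lcov usage, not from this log).
    $LCOV -q -c -t Autotest -d "$src" -o "$out/cov_test.info"
    $LCOV -q -a "$out/cov_base.info" -a "$out/cov_test.info" -o "$out/cov_total.info"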
00:14:12.976 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:14:12.976 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno 00:14:12.976 /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno:no functions found 00:14:12.976 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno 00:14:12.976 /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno:no functions found 00:14:12.976 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno 00:14:12.976 /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno:no functions found 00:14:12.976 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno 00:14:12.976 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno:no functions found 00:14:12.976 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno 00:14:12.976 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:14:12.976 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno 00:14:12.976 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:14:12.976 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno 00:14:12.976 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:14:12.976 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno 00:14:12.976 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:14:12.976 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno 00:14:12.976 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:14:12.976 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno 00:14:12.976 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:14:12.976 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno 00:14:12.976 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:14:12.976 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno 00:14:12.976 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno:no functions found 00:14:12.976 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno 00:14:12.976 /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno:no functions found 00:14:12.976 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno 00:14:12.976 /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno:no functions found 00:14:12.976 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno 00:14:12.976 /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:14:12.976 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno 00:14:12.976 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno:no functions found 00:14:12.976 
geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno 00:14:12.976 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno:no functions found 00:14:12.976 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno 00:14:12.976 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno:no functions found 00:14:12.976 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno 00:14:12.976 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno:no functions found 00:14:12.976 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno 00:14:12.976 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno:no functions found 00:14:12.976 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno 00:14:12.976 /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno:no functions found 00:14:12.976 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno 00:14:12.976 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:14:12.976 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno 00:14:12.976 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno:no functions found 00:14:12.976 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno 00:14:12.976 /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno:no functions found 00:14:12.976 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno 00:14:12.976 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:14:12.976 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno 00:14:12.976 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno:no functions found 00:14:12.976 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno 00:14:12.976 /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno:no functions found 00:14:12.976 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno 00:14:12.976 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno:no functions found 00:14:12.976 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno 00:14:12.976 /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:14:12.976 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno 00:14:12.976 /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:14:12.976 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno 00:14:12.976 /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:14:12.976 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno 00:14:12.976 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno:no functions found 00:14:12.976 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno 00:14:12.976 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:14:12.976 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno 00:14:12.976 /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno:no functions found 00:14:12.976 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno 00:14:12.976 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno:no functions found 00:14:12.976 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno 00:14:12.976 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:14:12.976 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno 00:14:12.976 /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:14:12.976 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno 00:14:12.976 /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno:no functions found 00:14:12.976 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno 00:14:12.976 /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:14:12.976 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno 00:14:12.976 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno:no functions found 00:14:12.976 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno 00:14:12.976 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:14:12.976 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno 00:14:12.976 /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno:no functions found 00:14:12.976 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno 00:14:12.976 /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno:no functions found 00:14:12.976 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno 00:14:12.976 /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno:no functions found 00:14:12.976 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno 00:14:12.976 /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno:no functions found 00:14:12.976 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno 00:14:12.976 /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno:no functions found 00:14:12.976 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno 00:14:12.976 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno:no functions found 00:14:12.976 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno 00:14:12.976 /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno:no functions found 00:14:12.976 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno 00:14:12.976 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno:no functions found 00:14:12.976 geninfo: WARNING: GCOV did not produce any data 
for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno 00:14:12.976 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:14:12.976 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno 00:14:12.976 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:14:12.976 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno 00:14:12.976 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:14:12.976 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:14:12.976 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:14:12.976 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno 00:14:12.976 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:14:12.976 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno 00:14:12.976 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:14:12.976 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno 00:14:12.976 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:14:12.976 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:14:12.976 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:14:12.976 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno 00:14:12.977 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:14:12.977 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno 00:14:12.977 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:14:12.977 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno 00:14:12.977 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno:no functions found 00:14:12.977 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno 00:14:12.977 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:14:12.977 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno 00:14:12.977 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:14:12.977 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno 00:14:12.977 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno:no functions found 00:14:12.977 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno 00:14:12.977 /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno:no functions found 00:14:12.977 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno 00:14:12.977 /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno:no functions found 00:14:12.977 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno 
00:14:12.977 /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno:no functions found 00:14:12.977 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno 00:14:12.977 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:14:12.977 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno 00:14:12.977 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno:no functions found 00:14:12.977 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno 00:14:12.977 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:14:12.977 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno 00:14:12.977 /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno:no functions found 00:14:12.977 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno 00:14:12.977 /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:14:12.977 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno 00:14:12.977 /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno:no functions found 00:14:12.977 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno 00:14:12.977 /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno:no functions found 00:14:12.977 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno 00:14:12.977 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:14:12.977 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno 00:14:12.977 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno:no functions found 00:14:12.977 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno 00:14:12.977 /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno:no functions found 00:14:12.977 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno 00:14:12.977 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno:no functions found 00:14:12.977 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno 00:14:12.977 /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno:no functions found 00:14:12.977 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno 00:14:12.977 /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno:no functions found 00:14:12.977 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno 00:14:12.977 /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno:no functions found 00:14:12.977 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno 00:14:12.977 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:14:12.977 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno 00:14:12.977 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:14:12.977 geninfo: WARNING: GCOV did 
not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno 00:14:12.977 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno:no functions found 00:14:12.977 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno 00:14:12.977 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno:no functions found 00:14:12.977 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno 00:14:12.977 /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno:no functions found 00:14:12.977 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno 00:14:12.977 /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno:no functions found 00:14:12.977 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno 00:14:16.258 09:55:29 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:14:16.258 09:55:29 -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:16.258 09:55:29 -- common/autotest_common.sh@10 -- # set +x 00:14:16.258 09:55:29 -- spdk/autotest.sh@91 -- # rm -f 00:14:16.258 09:55:29 -- spdk/autotest.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:14:17.190 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:14:17.190 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:14:17.190 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:14:17.190 09:55:30 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:14:17.190 09:55:30 -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:14:17.190 09:55:30 -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:14:17.190 09:55:30 -- common/autotest_common.sh@1670 -- # local nvme bdf 00:14:17.190 09:55:30 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:14:17.190 09:55:30 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:14:17.190 09:55:30 -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:14:17.190 09:55:30 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:14:17.190 09:55:30 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:14:17.190 09:55:30 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:14:17.190 09:55:30 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:14:17.190 09:55:30 -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:14:17.190 09:55:30 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:14:17.190 09:55:30 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:14:17.190 09:55:30 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:14:17.190 09:55:30 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n2 00:14:17.190 09:55:30 -- common/autotest_common.sh@1662 -- # local device=nvme1n2 00:14:17.190 09:55:30 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:14:17.190 09:55:30 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:14:17.190 09:55:30 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:14:17.190 09:55:30 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n3 00:14:17.190 09:55:30 -- common/autotest_common.sh@1662 -- # local device=nvme1n3 00:14:17.190 09:55:30 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:14:17.190 09:55:30 -- 
common/autotest_common.sh@1665 -- # [[ none != none ]] 00:14:17.190 09:55:30 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:14:17.190 09:55:30 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:14:17.190 09:55:30 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:14:17.190 09:55:30 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:14:17.190 09:55:30 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:14:17.190 09:55:30 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:14:17.190 No valid GPT data, bailing 00:14:17.190 09:55:30 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:14:17.190 09:55:30 -- scripts/common.sh@391 -- # pt= 00:14:17.190 09:55:30 -- scripts/common.sh@392 -- # return 1 00:14:17.190 09:55:30 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:14:17.190 1+0 records in 00:14:17.190 1+0 records out 00:14:17.190 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00468984 s, 224 MB/s 00:14:17.190 09:55:30 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:14:17.190 09:55:30 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:14:17.190 09:55:30 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n1 00:14:17.190 09:55:30 -- scripts/common.sh@378 -- # local block=/dev/nvme1n1 pt 00:14:17.190 09:55:30 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:14:17.190 No valid GPT data, bailing 00:14:17.190 09:55:30 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:14:17.190 09:55:30 -- scripts/common.sh@391 -- # pt= 00:14:17.190 09:55:30 -- scripts/common.sh@392 -- # return 1 00:14:17.191 09:55:30 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:14:17.448 1+0 records in 00:14:17.448 1+0 records out 00:14:17.448 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00557904 s, 188 MB/s 00:14:17.448 09:55:30 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:14:17.448 09:55:30 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:14:17.448 09:55:30 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n2 00:14:17.448 09:55:30 -- scripts/common.sh@378 -- # local block=/dev/nvme1n2 pt 00:14:17.448 09:55:30 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:14:17.448 No valid GPT data, bailing 00:14:17.448 09:55:30 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:14:17.448 09:55:30 -- scripts/common.sh@391 -- # pt= 00:14:17.448 09:55:30 -- scripts/common.sh@392 -- # return 1 00:14:17.448 09:55:30 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:14:17.448 1+0 records in 00:14:17.448 1+0 records out 00:14:17.448 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00602757 s, 174 MB/s 00:14:17.448 09:55:30 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:14:17.448 09:55:30 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:14:17.448 09:55:30 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n3 00:14:17.448 09:55:30 -- scripts/common.sh@378 -- # local block=/dev/nvme1n3 pt 00:14:17.448 09:55:30 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:14:17.448 No valid GPT data, bailing 00:14:17.448 09:55:30 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:14:17.448 09:55:30 -- scripts/common.sh@391 -- # pt= 00:14:17.448 09:55:30 -- scripts/common.sh@392 -- # return 1 00:14:17.448 09:55:30 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 
00:14:17.448 1+0 records in 00:14:17.448 1+0 records out 00:14:17.448 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00442058 s, 237 MB/s 00:14:17.448 09:55:30 -- spdk/autotest.sh@118 -- # sync 00:14:17.448 09:55:31 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:14:17.448 09:55:31 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:14:17.448 09:55:31 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:14:19.984 09:55:33 -- spdk/autotest.sh@124 -- # uname -s 00:14:19.984 09:55:33 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:14:19.984 09:55:33 -- spdk/autotest.sh@125 -- # run_test setup.sh /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:14:19.984 09:55:33 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:14:19.984 09:55:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:19.984 09:55:33 -- common/autotest_common.sh@10 -- # set +x 00:14:19.984 ************************************ 00:14:19.984 START TEST setup.sh 00:14:19.984 ************************************ 00:14:19.984 09:55:33 setup.sh -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:14:19.984 * Looking for test storage... 00:14:19.984 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:14:19.984 09:55:33 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:14:19.984 09:55:33 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:14:19.984 09:55:33 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:14:19.984 09:55:33 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:14:19.984 09:55:33 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:19.984 09:55:33 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:14:19.984 ************************************ 00:14:19.984 START TEST acl 00:14:19.984 ************************************ 00:14:19.984 09:55:33 setup.sh.acl -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:14:20.242 * Looking for test storage... 
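Stepping back to the pre-cleanup pass a few entries up: autotest walks every whole NVMe namespace, leaves zoned devices alone, and treats "No valid GPT data, bailing" plus an empty blkid PTTYPE as licence to scrub the first MiB with dd. A condensed sketch of that per-device loop (only the blkid half of the in-use check is shown; the log's block_in_use helper also consults scripts/spdk-gpt.py):

    shopt -s extglob                                  # needed for the !(*p*) namespace glob
    for dev in /dev/nvme*n!(*p*); do                  # whole namespaces, no partitions
        name=${dev#/dev/}
        # Conventional devices report "none" in queue/zoned; anything else is zoned and skipped.
        if [[ -e /sys/block/$name/queue/zoned && $(< /sys/block/$name/queue/zoned) != none ]]; then
            continue
        fi
        # An empty PTTYPE means no partition table was found, so the namespace is safe to wipe.
        if [[ -z $(blkid -s PTTYPE -o value "$dev") ]]; then
            dd if=/dev/zero of="$dev" bs=1M count=1   # clear stale metadata in the first MiB
        fi
    done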
00:14:20.242 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:14:20.242 09:55:33 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:14:20.242 09:55:33 setup.sh.acl -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:14:20.242 09:55:33 setup.sh.acl -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:14:20.242 09:55:33 setup.sh.acl -- common/autotest_common.sh@1670 -- # local nvme bdf 00:14:20.242 09:55:33 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:14:20.242 09:55:33 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:14:20.242 09:55:33 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:14:20.242 09:55:33 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:14:20.242 09:55:33 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:14:20.242 09:55:33 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:14:20.242 09:55:33 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:14:20.242 09:55:33 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:14:20.242 09:55:33 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:14:20.242 09:55:33 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:14:20.242 09:55:33 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:14:20.242 09:55:33 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n2 00:14:20.242 09:55:33 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme1n2 00:14:20.242 09:55:33 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:14:20.242 09:55:33 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:14:20.242 09:55:33 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:14:20.242 09:55:33 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n3 00:14:20.242 09:55:33 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme1n3 00:14:20.242 09:55:33 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:14:20.242 09:55:33 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:14:20.242 09:55:33 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:14:20.242 09:55:33 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:14:20.242 09:55:33 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:14:20.242 09:55:33 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:14:20.242 09:55:33 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:14:20.243 09:55:33 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:14:20.243 09:55:33 setup.sh.acl -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:14:21.176 09:55:34 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:14:21.176 09:55:34 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:14:21.176 09:55:34 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:14:21.176 09:55:34 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:14:21.176 09:55:34 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:14:21.176 09:55:34 setup.sh.acl -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:14:21.741 09:55:35 setup.sh.acl -- 
setup/acl.sh@19 -- # [[ (1af4 == *:*:*.* ]] 00:14:21.741 09:55:35 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:14:21.741 09:55:35 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:14:21.741 Hugepages 00:14:21.741 node hugesize free / total 00:14:21.741 09:55:35 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:14:21.741 09:55:35 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:14:21.741 09:55:35 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:14:21.741 00:14:21.741 Type BDF Vendor Device NUMA Driver Device Block devices 00:14:21.741 09:55:35 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:14:21.741 09:55:35 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:14:21.741 09:55:35 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:14:22.001 09:55:35 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:03.0 == *:*:*.* ]] 00:14:22.001 09:55:35 setup.sh.acl -- setup/acl.sh@20 -- # [[ virtio-pci == nvme ]] 00:14:22.001 09:55:35 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:14:22.001 09:55:35 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:14:22.001 09:55:35 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:10.0 == *:*:*.* ]] 00:14:22.001 09:55:35 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:14:22.001 09:55:35 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:14:22.001 09:55:35 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:14:22.001 09:55:35 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:14:22.001 09:55:35 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:14:22.001 09:55:35 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:11.0 == *:*:*.* ]] 00:14:22.001 09:55:35 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:14:22.001 09:55:35 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:14:22.001 09:55:35 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:14:22.001 09:55:35 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:14:22.001 09:55:35 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:14:22.001 09:55:35 setup.sh.acl -- setup/acl.sh@24 -- # (( 2 > 0 )) 00:14:22.259 09:55:35 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:14:22.259 09:55:35 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:14:22.259 09:55:35 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:22.259 09:55:35 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:14:22.259 ************************************ 00:14:22.259 START TEST denied 00:14:22.259 ************************************ 00:14:22.259 09:55:35 setup.sh.acl.denied -- common/autotest_common.sh@1123 -- # denied 00:14:22.259 09:55:35 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:00:10.0' 00:14:22.259 09:55:35 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:00:10.0' 00:14:22.259 09:55:35 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:14:22.259 09:55:35 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:14:22.259 09:55:35 setup.sh.acl.denied -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:14:23.196 0000:00:10.0 (1b36 0010): Skipping denied controller at 0000:00:10.0 00:14:23.196 09:55:36 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:00:10.0 00:14:23.196 09:55:36 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev 
driver 00:14:23.196 09:55:36 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:14:23.196 09:55:36 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:10.0 ]] 00:14:23.196 09:55:36 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:10.0/driver 00:14:23.196 09:55:36 setup.sh.acl.denied -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:14:23.196 09:55:36 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:14:23.196 09:55:36 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:14:23.196 09:55:36 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:14:23.196 09:55:36 setup.sh.acl.denied -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:14:23.764 00:14:23.764 real 0m1.652s 00:14:23.764 user 0m0.597s 00:14:23.764 sys 0m1.025s 00:14:23.764 09:55:37 setup.sh.acl.denied -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:23.764 09:55:37 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:14:23.764 ************************************ 00:14:23.764 END TEST denied 00:14:23.764 ************************************ 00:14:23.764 09:55:37 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:14:23.764 09:55:37 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:14:23.764 09:55:37 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:14:23.764 09:55:37 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:23.764 09:55:37 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:14:23.764 ************************************ 00:14:23.764 START TEST allowed 00:14:23.764 ************************************ 00:14:23.764 09:55:37 setup.sh.acl.allowed -- common/autotest_common.sh@1123 -- # allowed 00:14:23.764 09:55:37 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:00:10.0 00:14:23.764 09:55:37 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:14:23.764 09:55:37 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:00:10.0 .*: nvme -> .*' 00:14:23.764 09:55:37 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:14:23.764 09:55:37 setup.sh.acl.allowed -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:14:24.701 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:14:24.701 09:55:38 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 0000:00:11.0 00:14:24.701 09:55:38 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:14:24.701 09:55:38 setup.sh.acl.allowed -- setup/acl.sh@30 -- # for dev in "$@" 00:14:24.701 09:55:38 setup.sh.acl.allowed -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:11.0 ]] 00:14:24.701 09:55:38 setup.sh.acl.allowed -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:11.0/driver 00:14:24.701 09:55:38 setup.sh.acl.allowed -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:14:24.701 09:55:38 setup.sh.acl.allowed -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:14:24.701 09:55:38 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:14:24.701 09:55:38 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:14:24.701 09:55:38 setup.sh.acl.allowed -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:14:25.640 00:14:25.640 real 0m1.794s 00:14:25.640 user 0m0.728s 00:14:25.640 sys 0m1.080s 00:14:25.640 09:55:39 setup.sh.acl.allowed -- common/autotest_common.sh@1124 -- # 
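The acl suite traced above comes down to two reusable moves: build the device list by parsing "setup.sh status" (keep rows whose driver is nvme and whose BDF is not in PCI_BLOCKED), then verify a controller by resolving its sysfs driver symlink, which stays on nvme for a denied controller and flips to uio_pci_generic once setup.sh rebinds an allowed one. A condensed sketch of both steps; verify_driver is an illustrative name, not the helper acl.sh actually uses:

    rootdir=/home/vagrant/spdk_repo/spdk
    declare -a devs=()
    declare -A drivers=()
    # Collect NVMe controllers from `setup.sh status`, honoring the denied list.
    while read -r _ dev _ _ _ driver _; do
        [[ $dev == *:*:*.* ]] || continue             # skip hugepage and header rows
        [[ $driver == nvme ]] || continue             # only NVMe controllers are exercised
        [[ $PCI_BLOCKED == *"$dev"* ]] && continue    # e.g. PCI_BLOCKED=' 0000:00:10.0'
        devs+=("$dev")
        drivers["$dev"]=$driver
    done < <("$rootdir/scripts/setup.sh" status)

    # Which kernel driver is a BDF currently bound to?
    verify_driver() {                                 # verify_driver <bdf> <expected-driver>
        local link
        link=$(readlink -f "/sys/bus/pci/devices/$1/driver") || return 1
        [[ ${link##*/} == "$2" ]]                     # .../drivers/nvme -> nvme
    }

    verify_driver 0000:00:10.0 nvme                   # true while the controller is denied
    verify_driver 0000:00:10.0 uio_pci_generic        # true after the allowed run rebinds it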
xtrace_disable 00:14:25.640 09:55:39 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:14:25.640 ************************************ 00:14:25.640 END TEST allowed 00:14:25.640 ************************************ 00:14:25.640 09:55:39 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:14:25.640 00:14:25.640 real 0m5.608s 00:14:25.640 user 0m2.230s 00:14:25.640 sys 0m3.383s 00:14:25.640 09:55:39 setup.sh.acl -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:25.640 09:55:39 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:14:25.640 ************************************ 00:14:25.640 END TEST acl 00:14:25.640 ************************************ 00:14:25.640 09:55:39 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:14:25.640 09:55:39 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:14:25.640 09:55:39 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:14:25.640 09:55:39 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:25.640 09:55:39 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:14:25.640 ************************************ 00:14:25.640 START TEST hugepages 00:14:25.640 ************************************ 00:14:25.640 09:55:39 setup.sh.hugepages -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:14:25.900 * Looking for test storage... 00:14:25.900 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:14:25.900 09:55:39 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:14:25.900 09:55:39 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:14:25.900 09:55:39 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:14:25.900 09:55:39 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:14:25.900 09:55:39 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:14:25.900 09:55:39 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:14:25.900 09:55:39 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:14:25.900 09:55:39 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:14:25.900 09:55:39 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:14:25.900 09:55:39 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:14:25.900 09:55:39 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:14:25.900 09:55:39 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:14:25.900 09:55:39 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:14:25.900 09:55:39 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:14:25.900 09:55:39 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:14:25.900 09:55:39 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:14:25.900 09:55:39 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:14:25.900 09:55:39 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 5887192 kB' 'MemAvailable: 7411864 kB' 'Buffers: 2436 kB' 'Cached: 1736376 kB' 'SwapCached: 0 kB' 'Active: 479032 kB' 'Inactive: 1366140 kB' 'Active(anon): 116848 kB' 'Inactive(anon): 0 kB' 'Active(file): 362184 kB' 'Inactive(file): 1366140 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 
kB' 'Dirty: 224 kB' 'Writeback: 0 kB' 'AnonPages: 108096 kB' 'Mapped: 48796 kB' 'Shmem: 10488 kB' 'KReclaimable: 67024 kB' 'Slab: 142684 kB' 'SReclaimable: 67024 kB' 'SUnreclaim: 75660 kB' 'KernelStack: 6344 kB' 'PageTables: 4292 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 12412436 kB' 'Committed_AS: 339044 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54984 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 4018176 kB' 'DirectMap1G: 10485760 kB' 00:14:25.900 09:55:39 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:25.900 09:55:39 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:14:25.900 09:55:39 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:14:25.900 09:55:39 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:14:25.900 09:55:39 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:25.900 09:55:39 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:14:25.900 09:55:39 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:14:25.900 09:55:39 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:14:25.900 09:55:39 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:25.900 09:55:39 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:14:25.900 09:55:39 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:14:25.900 09:55:39 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:14:25.900 09:55:39 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:25.900 09:55:39 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:14:25.900 09:55:39 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:14:25.900 09:55:39 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:14:25.900 09:55:39 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:25.900 09:55:39 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:14:25.900 09:55:39 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:14:25.900 09:55:39 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:14:25.900 09:55:39 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:25.900 09:55:39 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:14:25.900 09:55:39 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:14:25.900 09:55:39 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:14:25.900 09:55:39 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:25.900 09:55:39 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:14:25.900 09:55:39 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:14:25.900 09:55:39 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:14:25.900 09:55:39 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:25.900 09:55:39 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:14:25.900 09:55:39 setup.sh.hugepages -- 
setup/common.sh@31 -- # IFS=': ' 00:14:25.900 09:55:39 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:14:25.900 09:55:39 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:25.900 09:55:39 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:14:25.900 09:55:39 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:14:25.900 09:55:39 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:14:25.900 09:55:39 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:25.900 09:55:39 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:14:25.900 09:55:39 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:14:25.900 09:55:39 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:14:25.900 09:55:39 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:25.900 09:55:39 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:14:25.900 09:55:39 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:14:25.900 09:55:39 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:14:25.900 09:55:39 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:25.900 09:55:39 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:14:25.900 09:55:39 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:14:25.900 09:55:39 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:14:25.900 09:55:39 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:25.900 09:55:39 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:14:25.900 09:55:39 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:14:25.900 09:55:39 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:14:25.900 09:55:39 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:25.900 09:55:39 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:14:25.900 09:55:39 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:14:25.900 09:55:39 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:14:25.900 09:55:39 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:25.900 09:55:39 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:14:25.900 09:55:39 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:14:25.901 09:55:39 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:14:25.901 09:55:39 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:25.901 09:55:39 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:14:25.901 09:55:39 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:14:25.901 09:55:39 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:14:25.901 09:55:39 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:25.901 09:55:39 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:14:25.901 09:55:39 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:14:25.901 09:55:39 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:14:25.901 09:55:39 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:25.901 09:55:39 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:14:25.901 09:55:39 setup.sh.hugepages -- setup/common.sh@31 -- # 
IFS=': ' 00:14:25.901 09:55:39 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:14:25.901 09:55:39 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:25.901 09:55:39 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:14:25.901 09:55:39 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:14:25.901 09:55:39 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:14:25.901 09:55:39 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:25.901 09:55:39 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:14:25.901 09:55:39 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:14:25.901 09:55:39 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:14:25.901 09:55:39 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:25.901 09:55:39 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:14:25.901 09:55:39 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:14:25.901 09:55:39 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:14:25.901 09:55:39 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:25.901 09:55:39 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:14:25.901 09:55:39 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:14:25.901 09:55:39 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:14:25.901 09:55:39 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:25.901 09:55:39 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:14:25.901 09:55:39 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:14:25.901 09:55:39 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:14:25.901 09:55:39 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:25.901 09:55:39 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:14:25.901 09:55:39 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:14:25.901 09:55:39 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:14:25.901 09:55:39 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:25.901 09:55:39 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:14:25.901 09:55:39 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:14:25.901 09:55:39 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:14:25.901 09:55:39 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:25.901 09:55:39 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:14:25.901 09:55:39 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:14:25.901 09:55:39 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:14:25.901 09:55:39 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:25.901 09:55:39 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:14:25.901 09:55:39 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:14:25.901 09:55:39 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:14:25.901 09:55:39 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:25.901 09:55:39 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:14:25.901 09:55:39 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:14:25.901 09:55:39 
setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:14:25.901 09:55:39 setup.sh.hugepages -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:25.901 09:55:39 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:14:25.901 09:55:39 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:14:25.901 09:55:39 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:14:25.901 09:55:39 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:25.901 09:55:39 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:14:25.901 09:55:39 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:14:25.901 09:55:39 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:14:25.901 09:55:39 setup.sh.hugepages -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:25.901 09:55:39 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:14:25.901 09:55:39 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:14:25.901 09:55:39 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:14:25.901 09:55:39 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:25.901 09:55:39 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:14:25.901 09:55:39 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:14:25.901 09:55:39 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:14:25.901 09:55:39 setup.sh.hugepages -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:25.901 09:55:39 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:14:25.901 09:55:39 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:14:25.901 09:55:39 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:14:25.901 09:55:39 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:25.901 09:55:39 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:14:25.901 09:55:39 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:14:25.901 09:55:39 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:14:25.901 09:55:39 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:25.901 09:55:39 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:14:25.901 09:55:39 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:14:25.901 09:55:39 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:14:25.901 09:55:39 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:25.901 09:55:39 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:14:25.901 09:55:39 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:14:25.901 09:55:39 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:14:25.901 09:55:39 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:25.901 09:55:39 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:14:25.901 09:55:39 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:14:25.901 09:55:39 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:14:25.901 09:55:39 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:25.901 09:55:39 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:14:25.901 09:55:39 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:14:25.901 09:55:39 
setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:14:25.901 09:55:39 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:25.901 09:55:39 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:14:25.901 09:55:39 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:14:25.901 09:55:39 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:14:25.901 09:55:39 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:25.901 09:55:39 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:14:25.901 09:55:39 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:14:25.901 09:55:39 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:14:25.901 09:55:39 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:25.901 09:55:39 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:14:25.901 09:55:39 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:14:25.901 09:55:39 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:14:25.901 09:55:39 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:25.901 09:55:39 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:14:25.901 09:55:39 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:14:25.901 09:55:39 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:14:25.901 09:55:39 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:25.901 09:55:39 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:14:25.901 09:55:39 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:14:25.901 09:55:39 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:14:25.901 09:55:39 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:25.901 09:55:39 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:14:25.901 09:55:39 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:14:25.901 09:55:39 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:14:25.901 09:55:39 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:25.901 09:55:39 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:14:25.901 09:55:39 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:14:25.901 09:55:39 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:14:25.901 09:55:39 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:25.901 09:55:39 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:14:25.901 09:55:39 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:14:25.901 09:55:39 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:14:25.901 09:55:39 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:25.901 09:55:39 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:14:25.901 09:55:39 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:14:25.901 09:55:39 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:14:25.901 09:55:39 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:25.901 09:55:39 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:14:25.901 09:55:39 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:14:25.901 09:55:39 
setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:14:25.901 09:55:39 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:25.901 09:55:39 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:14:25.901 09:55:39 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:14:25.901 09:55:39 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:14:25.901 09:55:39 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:25.901 09:55:39 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:14:25.901 09:55:39 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:14:25.901 09:55:39 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:14:25.901 09:55:39 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:25.901 09:55:39 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:14:25.901 09:55:39 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:14:25.902 09:55:39 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:14:25.902 09:55:39 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:25.902 09:55:39 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:14:25.902 09:55:39 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:14:25.902 09:55:39 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:14:25.902 09:55:39 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:25.902 09:55:39 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:14:25.902 09:55:39 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:14:25.902 09:55:39 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:14:25.902 09:55:39 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:14:25.902 09:55:39 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:14:25.902 09:55:39 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:14:25.902 09:55:39 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:14:25.902 09:55:39 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:14:25.902 09:55:39 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:14:25.902 09:55:39 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:14:25.902 09:55:39 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:14:25.902 09:55:39 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:14:25.902 09:55:39 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:14:25.902 09:55:39 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=1 00:14:25.902 09:55:39 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:14:25.902 09:55:39 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:14:25.902 09:55:39 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:14:25.902 09:55:39 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:14:25.902 09:55:39 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:14:25.902 09:55:39 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:14:25.902 09:55:39 
setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:14:25.902 09:55:39 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:14:25.902 09:55:39 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:14:25.902 09:55:39 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:14:25.902 09:55:39 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:14:25.902 09:55:39 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:14:25.902 09:55:39 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:25.902 09:55:39 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:14:25.902 ************************************ 00:14:25.902 START TEST default_setup 00:14:25.902 ************************************ 00:14:25.902 09:55:39 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1123 -- # default_setup 00:14:25.902 09:55:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:14:25.902 09:55:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:14:25.902 09:55:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:14:25.902 09:55:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:14:25.902 09:55:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0') 00:14:25.902 09:55:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:14:25.902 09:55:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:14:25.902 09:55:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:14:25.902 09:55:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:14:25.902 09:55:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:14:25.902 09:55:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:14:25.902 09:55:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:14:25.902 09:55:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:14:25.902 09:55:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:14:25.902 09:55:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:14:25.902 09:55:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:14:25.902 09:55:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:14:25.902 09:55:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:14:25.902 09:55:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:14:25.902 09:55:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:14:25.902 09:55:39 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:14:25.902 09:55:39 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:14:26.883 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:14:26.883 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:14:26.883 0000:00:11.0 (1b36 
0010): nvme -> uio_pci_generic 00:14:26.883 09:55:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:14:26.883 09:55:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:14:26.883 09:55:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t 00:14:26.883 09:55:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:14:26.883 09:55:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:14:26.883 09:55:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:14:26.883 09:55:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:14:26.883 09:55:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:14:26.883 09:55:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:14:26.883 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:14:26.884 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:14:26.884 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:14:26.884 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:14:26.884 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:14:26.884 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:14:26.884 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:14:26.884 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:14:26.884 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:14:26.884 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:26.884 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:26.884 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 7974840 kB' 'MemAvailable: 9499384 kB' 'Buffers: 2436 kB' 'Cached: 1736364 kB' 'SwapCached: 0 kB' 'Active: 494100 kB' 'Inactive: 1366152 kB' 'Active(anon): 131916 kB' 'Inactive(anon): 0 kB' 'Active(file): 362184 kB' 'Inactive(file): 1366152 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 232 kB' 'Writeback: 0 kB' 'AnonPages: 123132 kB' 'Mapped: 48720 kB' 'Shmem: 10464 kB' 'KReclaimable: 66740 kB' 'Slab: 142464 kB' 'SReclaimable: 66740 kB' 'SUnreclaim: 75724 kB' 'KernelStack: 6416 kB' 'PageTables: 4420 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 354204 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54984 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 4018176 kB' 'DirectMap1G: 10485760 kB' 00:14:26.884 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:26.884 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:26.884 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:26.884 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:26.884 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:26.884 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:26.884 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:26.884 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:26.884 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:26.884 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:26.884 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:26.884 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:26.884 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:26.884 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:26.884 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:26.884 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:26.884 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:26.884 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:26.884 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:26.884 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:26.884 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:26.884 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:26.884 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:26.884 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:26.884 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:26.884 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:26.884 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:26.884 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:26.884 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:26.884 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:26.884 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:26.884 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:26.884 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:26.884 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:26.884 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:26.884 09:55:40 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:14:26.884 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:26.884 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:26.884 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:26.884 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:26.884 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:26.884 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:26.884 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:26.884 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:26.884 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:26.884 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:26.884 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:26.884 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:26.884 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:26.884 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:26.884 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:26.884 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:26.884 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:26.884 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:26.884 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:26.884 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:26.884 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:26.884 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:26.884 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:26.884 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:26.884 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:26.884 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:26.884 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:26.884 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:26.884 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:26.884 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:26.884 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:26.884 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:26.884 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:26.884 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 
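The trace above and below is setup/common.sh's get_meminfo scanning /proc/meminfo one field at a time: each line is split with IFS=': ' into a key and a value, every key that is not the requested one (here AnonHugePages) hits the continue branch, and the matching value is finally echoed back to the caller. A minimal, self-contained sketch of that lookup pattern, using a hypothetical function name (lookup_meminfo) rather than the real SPDK helper:

#!/usr/bin/env bash
# Sketch of the key/value scan seen in the trace; not the SPDK source.
lookup_meminfo() {
    local want=$1 var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$want" ]] || continue   # skip non-matching keys, as in the trace
        echo "$val"                         # value without its kB suffix
        return 0
    done < /proc/meminfo
    return 1
}
# Example lookups matching the snapshots in this log:
#   lookup_meminfo Hugepagesize   -> 2048
#   lookup_meminfo AnonHugePages  -> 0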
00:14:26.884 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:26.884 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:26.884 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:26.884 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:26.884 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:26.884 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:26.884 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:26.884 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:26.884 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:26.884 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:26.884 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:26.884 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:26.884 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:26.884 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:26.884 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:26.884 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:26.884 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:26.884 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:26.884 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:26.884 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:26.884 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:26.884 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:26.884 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:26.884 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:26.884 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:26.884 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:26.884 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:26.884 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:26.884 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:26.884 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:26.884 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:26.884 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:26.884 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:26.884 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:26.884 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ 
SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:26.884 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:26.884 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:26.884 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:26.884 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:26.884 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:26.884 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:26.884 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:26.884 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:26.884 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:26.884 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:26.884 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:26.884 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:26.884 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:26.884 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:26.884 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:26.884 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:26.884 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:26.884 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:26.884 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:26.884 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:26.884 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:26.884 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:26.884 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:26.884 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:26.884 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:26.884 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:26.884 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:26.884 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:26.884 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:26.884 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:26.884 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:26.884 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:26.884 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:26.884 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:26.884 09:55:40 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:26.884 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:26.884 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:26.884 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:26.884 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:26.884 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:26.884 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:26.884 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:26.884 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:26.884 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:26.884 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:26.884 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:26.884 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:26.884 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:26.884 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:26.884 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:26.884 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:26.884 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:26.884 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:26.884 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:26.884 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:26.884 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:26.884 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:14:26.884 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:14:26.884 09:55:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0 00:14:26.884 09:55:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:14:26.884 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:14:26.884 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:14:26.884 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:14:26.884 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:14:26.884 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:14:26.884 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:14:26.884 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:14:26.884 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:14:26.884 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- 
# mem=("${mem[@]#Node +([0-9]) }") 00:14:26.884 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:26.884 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:26.884 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 7974840 kB' 'MemAvailable: 9499384 kB' 'Buffers: 2436 kB' 'Cached: 1736364 kB' 'SwapCached: 0 kB' 'Active: 493964 kB' 'Inactive: 1366152 kB' 'Active(anon): 131780 kB' 'Inactive(anon): 0 kB' 'Active(file): 362184 kB' 'Inactive(file): 1366152 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 240 kB' 'Writeback: 0 kB' 'AnonPages: 123044 kB' 'Mapped: 48592 kB' 'Shmem: 10464 kB' 'KReclaimable: 66740 kB' 'Slab: 142464 kB' 'SReclaimable: 66740 kB' 'SUnreclaim: 75724 kB' 'KernelStack: 6400 kB' 'PageTables: 4356 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 353448 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54968 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 4018176 kB' 'DirectMap1G: 10485760 kB' 00:14:26.884 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:26.884 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:26.884 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:26.884 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:26.884 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:26.884 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:26.884 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:26.884 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:26.884 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:26.884 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:26.884 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:26.884 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:26.884 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:26.884 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:26.884 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:26.884 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:26.884 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:26.884 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:26.885 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:26.885 09:55:40 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:26.885 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:26.885 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:26.885 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:26.885 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:26.885 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:26.885 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:26.885 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:26.885 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:26.885 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:26.885 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:26.885 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:26.885 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:26.885 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:26.885 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:26.885 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:26.885 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:26.885 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:26.885 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:26.885 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:26.885 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:26.885 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:26.885 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:26.885 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:26.885 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:26.885 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:26.885 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:26.885 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:26.885 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:26.885 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:26.885 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:26.885 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:26.885 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:26.885 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:26.885 09:55:40 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:26.885 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:26.885 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:26.885 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:26.885 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:26.885 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:26.885 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:26.885 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:26.885 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:26.885 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:26.885 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:26.885 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:26.885 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:26.885 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:26.885 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:26.885 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:26.885 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:26.885 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:26.885 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:26.885 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:26.885 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:26.885 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:26.885 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:26.885 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:26.885 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:26.885 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:26.885 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:26.885 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:26.885 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:26.885 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:26.885 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:26.885 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:26.885 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:26.885 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:26.885 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 
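While the HugePages_Surp scan continues below, the /proc/meminfo snapshots being walked here are internally consistent with the size requested earlier in the trace (get_test_nr_hugepages 2097152 against the 2048 kB default huge page size). A quick check using only numbers that appear in this log:

# All values in kB, taken from the trace above; nothing here is measured live.
size_kb=2097152           # argument passed to get_test_nr_hugepages in the trace
hugepagesize_kb=2048      # Hugepagesize reported in the meminfo snapshots
echo $(( size_kb / hugepagesize_kb ))   # prints 1024

That 1024 is the nr_hugepages value set earlier in the trace and matches the HugePages_Total, HugePages_Free, and Hugetlb (1024 x 2048 kB = 2097152 kB) figures in every snapshot, which is presumably what verify_nr_hugepages is collecting these counters to confirm.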
00:14:26.885 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:26.885 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:26.885 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:26.885 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:26.885 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:26.885 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:26.885 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:26.885 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:26.885 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:26.885 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:26.885 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:26.885 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:26.885 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:26.885 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:26.885 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:26.885 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:26.885 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:26.885 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:26.885 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:26.885 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:26.885 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:26.885 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:26.885 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:26.885 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:26.885 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:26.885 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:26.885 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:26.885 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:26.885 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:26.885 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:26.885 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:26.885 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:26.885 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:26.885 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:26.885 09:55:40 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:26.885 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:26.885 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:26.885 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:26.885 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:26.885 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:26.885 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:26.885 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:26.885 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:26.885 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:26.885 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:26.885 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:26.885 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:26.885 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:26.885 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:26.885 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:26.885 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:26.885 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:26.885 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:26.885 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:26.885 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:26.885 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:26.885 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:26.885 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:26.885 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:26.885 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:26.885 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:26.885 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:26.885 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:26.885 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:26.885 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:26.885 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:26.885 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:26.885 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:26.885 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- 
# [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:26.885 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:26.885 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:26.885 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:26.885 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:26.885 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:26.885 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:26.885 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:26.885 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:26.885 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:26.885 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:26.885 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:26.885 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:26.885 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:26.885 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:26.885 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:26.885 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:26.885 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:26.885 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:26.885 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:26.885 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:26.885 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:26.885 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:26.885 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:26.885 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:26.885 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:26.885 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:26.885 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:26.885 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:26.885 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:26.885 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:26.885 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:26.885 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:26.885 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:26.885 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': 
' 00:14:26.885 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:26.885 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:26.885 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:26.885 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:26.885 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:26.885 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:26.885 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:26.885 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:26.885 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:26.885 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:26.885 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:26.885 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:26.885 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:26.885 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:26.885 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:14:26.885 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:14:26.885 09:55:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0 00:14:26.885 09:55:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:14:26.885 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:14:26.885 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:14:26.885 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:14:26.885 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:14:26.885 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:14:26.885 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:14:26.885 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:14:26.885 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:14:26.885 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:14:26.885 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:26.885 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:26.885 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 7975180 kB' 'MemAvailable: 9499724 kB' 'Buffers: 2436 kB' 'Cached: 1736364 kB' 'SwapCached: 0 kB' 'Active: 493392 kB' 'Inactive: 1366152 kB' 'Active(anon): 131208 kB' 'Inactive(anon): 0 kB' 'Active(file): 362184 kB' 'Inactive(file): 1366152 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 240 kB' 'Writeback: 0 kB' 'AnonPages: 122396 kB' 'Mapped: 
48592 kB' 'Shmem: 10464 kB' 'KReclaimable: 66740 kB' 'Slab: 142460 kB' 'SReclaimable: 66740 kB' 'SUnreclaim: 75720 kB' 'KernelStack: 6352 kB' 'PageTables: 4216 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 353448 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54968 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 4018176 kB' 'DirectMap1G: 10485760 kB' 00:14:26.885 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:26.885 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:26.885 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:26.885 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:26.885 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:26.885 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:26.885 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:26.885 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:26.885 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:26.885 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:26.885 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:26.885 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:26.886 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:26.886 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:26.886 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:26.886 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:26.886 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:26.886 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:26.886 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:26.886 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:26.886 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:26.886 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:26.886 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:26.886 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:26.886 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:26.886 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:26.886 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 
-- # IFS=': ' 00:14:26.886 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:26.886 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:26.886 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:26.886 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:26.886 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:26.886 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:26.886 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:26.886 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:26.886 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:26.886 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:26.886 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:26.886 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:26.886 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:26.886 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:26.886 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:26.886 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:26.886 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:26.886 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:26.886 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:26.886 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:26.886 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:26.886 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:26.886 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:26.886 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:26.886 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:26.886 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:26.886 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:26.886 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:26.886 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:26.886 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:26.886 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:26.886 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:26.886 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:26.886 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:14:26.886 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:26.886 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:26.886 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:26.886 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:26.886 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:26.886 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:26.886 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:26.886 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:26.886 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:26.886 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:26.886 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:26.886 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:26.886 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:26.886 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:26.886 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:26.886 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:26.886 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:26.886 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:26.886 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:26.886 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:26.886 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:26.886 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:26.886 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:26.886 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:26.886 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:26.886 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:26.886 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:26.886 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:26.886 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:26.886 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:26.886 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:26.886 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:26.886 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:26.886 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:26.886 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r 
var val _ 00:14:26.886 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:26.886 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:26.886 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:26.886 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:26.886 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:26.886 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:26.886 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:26.886 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:26.886 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:26.886 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:26.886 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:26.886 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:26.886 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:26.886 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:26.886 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:26.886 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:26.886 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:26.886 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:26.886 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:26.886 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:26.886 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:26.886 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:26.886 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:26.886 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:26.886 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:26.886 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:26.886 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:26.886 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:26.886 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:26.886 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:26.886 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:26.886 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:26.886 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:26.886 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:26.886 
09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:26.886 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:26.886 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:26.886 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:26.886 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:26.886 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:26.886 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:26.886 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:26.886 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:26.886 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:26.886 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:26.886 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:26.886 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:26.886 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:26.886 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:26.886 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:26.886 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:26.886 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:26.886 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:26.886 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:26.886 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:26.886 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:26.886 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:26.886 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:26.886 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:26.886 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:26.886 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:26.886 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:26.886 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:26.886 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:26.886 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:26.886 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:26.886 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:26.886 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:26.886 09:55:40 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:26.886 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:26.886 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:26.886 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:26.886 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:26.886 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:26.886 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:26.886 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:26.886 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:26.886 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:26.886 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:26.886 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:26.886 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:26.886 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:26.886 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:26.886 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:26.886 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:26.886 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:26.886 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:26.886 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:26.886 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:26.886 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:26.886 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:26.886 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:26.886 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:26.886 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:26.886 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:26.886 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:26.886 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:26.886 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:26.886 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:26.886 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:26.886 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:26.886 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:26.886 09:55:40 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:14:26.886 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:26.886 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:26.886 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:14:26.886 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:14:26.886 09:55:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0 00:14:26.886 09:55:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:14:26.886 nr_hugepages=1024 00:14:26.886 resv_hugepages=0 00:14:26.886 09:55:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:14:26.886 09:55:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:14:26.886 surplus_hugepages=0 00:14:26.886 09:55:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:14:26.886 anon_hugepages=0 00:14:26.886 09:55:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:14:26.886 09:55:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:14:26.886 09:55:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:14:26.886 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:14:26.886 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:14:26.886 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:14:26.886 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:14:26.886 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:14:26.886 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:14:26.886 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:14:26.886 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:14:26.886 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:14:26.887 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 7975204 kB' 'MemAvailable: 9499748 kB' 'Buffers: 2436 kB' 'Cached: 1736364 kB' 'SwapCached: 0 kB' 'Active: 493548 kB' 'Inactive: 1366152 kB' 'Active(anon): 131364 kB' 'Inactive(anon): 0 kB' 'Active(file): 362184 kB' 'Inactive(file): 1366152 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 240 kB' 'Writeback: 0 kB' 'AnonPages: 122536 kB' 'Mapped: 48592 kB' 'Shmem: 10464 kB' 'KReclaimable: 66740 kB' 'Slab: 142460 kB' 'SReclaimable: 66740 kB' 'SUnreclaim: 75720 kB' 'KernelStack: 6336 kB' 'PageTables: 4168 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 353448 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54968 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 
'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 4018176 kB' 'DirectMap1G: 10485760 kB' 00:14:26.887 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:26.887 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:26.887 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:26.887 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:26.887 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:26.887 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:26.887 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:26.887 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:26.887 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:26.887 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:26.887 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:26.887 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:26.887 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:26.887 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:26.887 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:26.887 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:26.887 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:26.887 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:26.887 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:26.887 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:26.887 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:26.887 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:26.887 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:26.887 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:26.887 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:26.887 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:26.887 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:26.887 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:26.887 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:26.887 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:26.887 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:26.887 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:26.887 09:55:40 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:14:26.887 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:26.887 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:26.887 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:26.887 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:26.887 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:26.887 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:26.887 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:26.887 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:26.887 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:26.887 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:26.887 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:26.887 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:26.887 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:26.887 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:26.887 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:26.887 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:26.887 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:26.887 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:26.887 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:26.887 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:26.887 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:26.887 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:26.887 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:26.887 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:26.887 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:26.887 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:26.887 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:26.887 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:26.887 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:26.887 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:26.887 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:26.887 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:26.887 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:26.887 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:26.887 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:26.887 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:26.887 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:26.887 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:26.887 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:26.887 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:26.887 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:26.887 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:26.887 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:26.887 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:26.887 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:26.887 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:26.887 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:26.887 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:26.887 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:26.887 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:26.887 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:26.887 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:26.887 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:26.887 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:26.887 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:26.887 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:26.887 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:26.887 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:26.887 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:26.887 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:26.887 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:26.887 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:26.887 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:26.887 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:26.887 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:26.887 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:26.887 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:26.887 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:26.887 09:55:40 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:26.887 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:26.887 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:26.887 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:26.887 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:26.887 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:26.887 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:26.887 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:26.887 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:26.887 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:26.887 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:26.887 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:26.887 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:26.887 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:26.887 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:26.887 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:26.887 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:26.887 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:26.887 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:26.887 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:26.887 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:26.887 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:26.887 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:26.887 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:26.887 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:26.887 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:26.887 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:26.887 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:26.887 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:26.887 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:26.887 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:26.887 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:26.887 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:26.887 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:26.887 
09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:26.887 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:26.887 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:26.887 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:26.887 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:26.887 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:26.887 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:26.887 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:26.887 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:26.887 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:26.887 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:26.887 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:26.887 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:26.887 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:26.887 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:26.887 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:26.887 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:26.887 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:26.887 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:26.887 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:26.887 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:26.887 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:26.887 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:26.887 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:26.887 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:26.887 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:26.887 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:26.887 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:26.887 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:26.887 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:26.887 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:26.887 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:26.887 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:26.887 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:26.887 09:55:40 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:26.887 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:26.887 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:26.887 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:26.887 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:26.887 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:26.887 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:26.887 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:26.887 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:26.887 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:26.887 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:26.887 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:26.887 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:26.887 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:26.887 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:26.887 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:26.887 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:26.887 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:26.887 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:26.887 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:26.887 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:26.887 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:26.887 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:26.887 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:26.887 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:26.887 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:26.887 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024 00:14:26.887 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:14:26.887 09:55:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:14:26.887 09:55:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes 00:14:26.887 09:55:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node 00:14:26.887 09:55:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:14:26.887 09:55:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:14:26.887 09:55:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # 
no_nodes=1 00:14:26.887 09:55:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:14:26.887 09:55:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:14:26.887 09:55:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:14:26.887 09:55:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:14:26.887 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:14:26.887 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0 00:14:26.887 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:14:26.887 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:14:26.887 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:14:26.887 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:14:26.887 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:14:26.887 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:14:26.887 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:14:26.887 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:26.887 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:26.888 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 7975204 kB' 'MemUsed: 4266764 kB' 'SwapCached: 0 kB' 'Active: 493548 kB' 'Inactive: 1366152 kB' 'Active(anon): 131364 kB' 'Inactive(anon): 0 kB' 'Active(file): 362184 kB' 'Inactive(file): 1366152 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 240 kB' 'Writeback: 0 kB' 'FilePages: 1738800 kB' 'Mapped: 48592 kB' 'AnonPages: 122536 kB' 'Shmem: 10464 kB' 'KernelStack: 6336 kB' 'PageTables: 4168 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 66740 kB' 'Slab: 142460 kB' 'SReclaimable: 66740 kB' 'SUnreclaim: 75720 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:14:26.888 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:26.888 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:26.888 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:26.888 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:26.888 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:26.888 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:26.888 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:26.888 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:26.888 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:26.888 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # 
continue 00:14:26.888 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:26.888 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:26.888 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:26.888 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:26.888 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:26.888 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:26.888 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:26.888 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:26.888 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:26.888 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:26.888 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:26.888 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:26.888 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:26.888 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:26.888 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:26.888 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:26.888 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:26.888 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:26.888 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:26.888 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:26.888 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:26.888 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:26.888 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:26.888 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:26.888 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:26.888 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:26.888 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:26.888 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:26.888 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:26.888 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:26.888 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:26.888 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:26.888 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:26.888 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:26.888 09:55:40 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:26.888 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:26.888 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:26.888 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:26.888 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:26.888 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:26.888 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:26.888 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:26.888 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:26.888 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:26.888 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:26.888 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:26.888 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:26.888 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:26.888 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:26.888 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:26.888 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:26.888 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:26.888 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:26.888 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:26.888 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:26.888 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:26.888 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:26.888 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:26.888 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:26.888 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:26.888 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:26.888 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:26.888 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:26.888 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:26.888 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:26.888 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:26.888 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:26.888 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:26.888 09:55:40 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:14:26.888 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:26.888 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:26.888 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:26.888 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:26.888 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:26.888 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:26.888 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:26.888 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:26.888 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:26.888 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:26.888 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:26.888 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:26.888 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:26.888 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:26.888 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:26.888 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:26.888 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:26.888 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:26.888 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:26.888 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:26.888 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:26.888 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:26.888 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:26.888 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:26.888 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:26.888 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:26.888 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:26.888 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:26.888 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:26.888 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:26.888 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:26.888 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:26.888 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:26.888 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:26.888 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:26.888 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:26.888 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:26.888 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:26.888 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:26.888 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:26.888 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:26.888 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:26.888 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:26.888 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:26.888 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:26.888 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:26.888 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:26.888 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:26.888 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:26.888 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:26.888 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:26.888 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:26.888 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:26.888 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:26.888 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:26.888 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:26.888 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:26.888 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:26.888 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:26.889 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:26.889 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:26.889 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:26.889 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:14:26.889 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:14:26.889 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:14:26.889 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:26.889 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:14:26.889 09:55:40 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 
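Editor's note: the wall of xtrace above is setup/common.sh's get_meminfo helper walking every key of a /proc/meminfo snapshot and hitting "continue" on all of them until it reaches HugePages_Surp, whose value (0) it echoes before returning; the same scan repeats further down for AnonHugePages, HugePages_Surp and HugePages_Rsvd once per_node_1G_alloc starts. A minimal sketch of the pattern being traced, reconstructed from the trace itself rather than copied from the SPDK source (the real helper also mapfiles the dump and strips the "Node N " prefix so it can read /sys/devices/system/node/node*/meminfo; the function name below is hypothetical):

get_meminfo_sketch() {                    # hypothetical name, for illustration only
  local get=$1 var val _                  # field to report, e.g. HugePages_Surp
  while IFS=': ' read -r var val _; do    # "HugePages_Surp:   0" -> var=key, val=value
    [[ $var == "$get" ]] || continue      # every non-matching key is the 'continue' above
    echo "$val"                           # e.g. 0 for HugePages_Surp on this runner
    return 0
  done </proc/meminfo
}
# Usage: get_meminfo_sketch HugePages_Surp    # prints 0 here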
00:14:26.889 09:55:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:14:26.889 09:55:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:14:26.889 09:55:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:14:26.889 09:55:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:14:26.889 09:55:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:14:26.889 node0=1024 expecting 1024
00:14:26.889 09:55:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:14:26.889
00:14:26.889 real 0m1.070s
00:14:26.889 user 0m0.463s
00:14:26.889 sys 0m0.585s
00:14:26.889 09:55:40 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1124 -- # xtrace_disable
00:14:26.889 09:55:40 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x
00:14:26.889 ************************************
00:14:26.889 END TEST default_setup
00:14:26.889 ************************************
00:14:27.147 09:55:40 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0
00:14:27.147 09:55:40 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc
00:14:27.147 09:55:40 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:14:27.147 09:55:40 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable
00:14:27.147 09:55:40 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:14:27.147 ************************************
00:14:27.147 START TEST per_node_1G_alloc
00:14:27.147 ************************************
00:14:27.147 09:55:40 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1123 -- # per_node_1G_alloc
00:14:27.147 09:55:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=,
00:14:27.147 09:55:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0
00:14:27.147 09:55:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576
00:14:27.147 09:55:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:14:27.147 09:55:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift
00:14:27.147 09:55:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0')
00:14:27.147 09:55:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids
00:14:27.147 09:55:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:14:27.147 09:55:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:14:27.147 09:55:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:14:27.147 09:55:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:14:27.147 09:55:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:14:27.147 09:55:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:14:27.147 09:55:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:14:27.147 09:55:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:14:27.147 09:55:40
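Editor's note: per_node_1G_alloc, which starts in the trace above, asks get_test_nr_hugepages for 1048576 kB (1 GiB) of hugepages restricted to NUMA node 0. With the 2048 kB Hugepagesize reported in the meminfo dumps below, that is 1048576 / 2048 = 512 pages, which is where nr_hugepages=512 and the NRHUGE=512 / HUGENODE=0 values handed to scripts/setup.sh come from. A rough sketch of that bookkeeping, reconstructed from the trace rather than the verbatim setup/hugepages.sh (the page size is hard-coded here for illustration; the real function derives default_hugepages from /proc/meminfo):

#!/usr/bin/env bash
size_kb=1048576                                   # requested allocation, in kB (1 GiB)
hugepagesize_kb=2048                              # Hugepagesize from /proc/meminfo on this VM
user_nodes=(0)                                    # node id left over after the size is shifted off
(( size_kb >= hugepagesize_kb )) || exit 1        # mirrors "(( size >= default_hugepages ))"
nr_hugepages=$(( size_kb / hugepagesize_kb ))     # 1048576 / 2048 = 512
declare -a nodes_test=()
for node in "${user_nodes[@]}"; do
  nodes_test[node]=$nr_hugepages                  # node0 -> 512 pages
done
echo "NRHUGE=$nr_hugepages HUGENODE=${user_nodes[0]}"   # the values passed to scripts/setup.sh below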
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:14:27.147 09:55:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:14:27.147 09:55:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:14:27.147 09:55:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:14:27.147 09:55:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0 00:14:27.147 09:55:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512 00:14:27.147 09:55:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0 00:14:27.147 09:55:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output 00:14:27.147 09:55:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:14:27.147 09:55:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:14:27.406 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:14:27.406 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:14:27.406 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:14:27.667 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=512 00:14:27.667 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:14:27.667 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node 00:14:27.667 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:14:27.667 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:14:27.667 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp 00:14:27.667 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv 00:14:27.667 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon 00:14:27.667 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:14:27.667 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:14:27.667 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:14:27.667 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:14:27.667 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:14:27.667 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:14:27.667 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:14:27.667 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:14:27.667 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:14:27.667 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:14:27.667 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:14:27.667 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:27.667 09:55:41 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:14:27.667 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 9032120 kB' 'MemAvailable: 10556664 kB' 'Buffers: 2436 kB' 'Cached: 1736364 kB' 'SwapCached: 0 kB' 'Active: 493696 kB' 'Inactive: 1366152 kB' 'Active(anon): 131512 kB' 'Inactive(anon): 0 kB' 'Active(file): 362184 kB' 'Inactive(file): 1366152 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 248 kB' 'Writeback: 0 kB' 'AnonPages: 122636 kB' 'Mapped: 48788 kB' 'Shmem: 10464 kB' 'KReclaimable: 66740 kB' 'Slab: 142508 kB' 'SReclaimable: 66740 kB' 'SUnreclaim: 75768 kB' 'KernelStack: 6384 kB' 'PageTables: 4320 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 353448 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55016 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 4018176 kB' 'DirectMap1G: 10485760 kB' 00:14:27.667 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:27.667 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:27.667 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:27.667 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:27.667 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:27.667 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:27.667 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:27.667 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:27.667 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:27.667 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:27.667 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:27.667 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:27.667 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:27.667 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:27.667 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:27.667 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:27.667 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:27.667 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:27.667 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:27.667 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:27.667 09:55:41 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:27.667 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:27.667 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:27.667 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:27.667 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:27.667 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:27.667 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:27.667 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:27.667 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:27.667 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:27.667 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:27.667 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:27.667 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:27.667 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:27.667 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:27.667 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:27.667 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:27.667 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:27.667 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:27.667 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:27.667 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:27.667 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:27.667 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:27.667 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:27.667 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:27.667 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:27.667 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:27.667 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:27.667 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:27.667 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:27.667 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:27.667 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:27.667 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:27.667 09:55:41 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:27.667 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:27.667 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:27.667 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:27.667 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:27.667 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:27.667 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:27.667 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:27.667 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:27.667 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:27.667 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:27.667 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:27.667 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:27.667 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:27.667 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:27.667 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:27.667 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:27.667 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:27.667 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:27.667 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:27.667 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:27.667 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:27.667 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:27.667 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:27.667 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:27.667 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:27.667 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:27.667 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:27.667 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:27.667 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:27.667 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:27.667 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:27.667 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:27.667 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:14:27.667 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:27.667 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:27.667 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:27.667 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:27.667 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:27.667 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:27.667 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:27.667 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:27.667 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:27.667 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:27.667 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:27.667 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:27.667 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:27.667 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:27.667 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:27.667 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:27.667 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:27.667 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:27.667 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:27.667 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:27.667 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:27.667 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:27.667 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:27.667 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:27.667 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:27.667 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:27.667 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:27.667 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:27.667 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:27.667 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:27.667 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:27.667 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:27.667 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:27.667 09:55:41 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:27.667 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:27.667 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:27.668 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:27.668 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:27.668 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:27.668 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:27.668 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:27.668 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:27.668 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:27.668 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:27.668 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:27.668 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:27.668 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:27.668 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:27.668 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:27.668 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:27.668 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:27.668 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:27.668 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:27.668 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:27.668 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:27.668 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:27.668 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:27.668 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:27.668 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:27.668 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:27.668 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:27.668 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:27.668 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:27.668 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:27.668 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:27.668 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:27.668 09:55:41 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:27.668 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:27.668 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:27.668 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:27.668 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:27.668 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:27.668 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:27.668 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:27.668 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:14:27.668 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:14:27.668 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:14:27.668 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:14:27.668 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:14:27.668 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:14:27.668 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:14:27.668 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:14:27.668 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:14:27.668 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:14:27.668 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:14:27.668 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:14:27.668 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:14:27.668 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:27.668 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:27.668 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 9033092 kB' 'MemAvailable: 10557636 kB' 'Buffers: 2436 kB' 'Cached: 1736364 kB' 'SwapCached: 0 kB' 'Active: 493668 kB' 'Inactive: 1366152 kB' 'Active(anon): 131484 kB' 'Inactive(anon): 0 kB' 'Active(file): 362184 kB' 'Inactive(file): 1366152 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 252 kB' 'Writeback: 0 kB' 'AnonPages: 122684 kB' 'Mapped: 48596 kB' 'Shmem: 10464 kB' 'KReclaimable: 66740 kB' 'Slab: 142480 kB' 'SReclaimable: 66740 kB' 'SUnreclaim: 75740 kB' 'KernelStack: 6352 kB' 'PageTables: 4216 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 353448 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54968 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 
'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 4018176 kB' 'DirectMap1G: 10485760 kB' 00:14:27.668 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:27.668 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:27.668 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:27.668 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:27.668 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:27.668 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:27.668 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:27.668 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:27.668 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:27.668 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:27.668 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:27.668 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:27.668 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:27.668 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:27.668 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:27.668 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:27.668 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:27.668 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:27.668 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:27.668 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:27.668 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:27.668 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:27.668 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:27.668 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:27.668 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:27.668 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:27.668 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:27.668 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:27.668 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:27.668 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:27.668 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:27.668 09:55:41 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:27.668 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:27.668 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:27.668 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:27.668 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:27.668 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:27.668 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:27.668 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:27.668 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:27.668 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:27.668 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:27.668 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:27.668 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:27.668 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:27.668 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:27.668 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:27.668 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:27.668 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:27.668 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:27.668 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:27.668 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:27.668 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:27.668 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:27.668 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:27.668 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:27.668 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:27.668 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:27.668 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:27.668 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:27.668 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:27.668 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:27.668 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:27.668 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:27.668 09:55:41 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:27.668 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:27.668 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:27.668 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:27.668 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:27.668 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:27.668 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:27.668 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:27.668 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:27.668 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:27.668 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:27.668 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:27.668 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:27.668 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:27.668 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:27.668 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:27.668 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:27.668 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:27.668 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:27.668 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:27.668 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:27.668 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:27.668 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:27.668 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:27.668 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:27.668 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:27.668 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:27.668 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:27.668 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:27.668 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:27.668 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:27.668 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:27.668 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:27.668 09:55:41 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:27.668 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:27.668 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:27.668 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:27.668 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:27.668 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:27.668 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:27.668 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:27.668 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:27.668 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:27.668 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:27.668 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:27.668 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:27.668 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:27.668 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:27.668 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:27.668 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:27.668 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:27.668 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:27.668 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:27.668 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:27.668 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:27.668 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:27.668 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:27.668 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:27.668 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:27.668 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:27.668 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:27.668 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:27.668 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:27.668 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:27.668 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:27.668 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:27.668 09:55:41 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:27.668 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:27.668 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:27.668 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:27.668 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:27.668 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:27.668 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:27.668 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:27.668 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:27.668 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:27.668 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:27.668 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:27.668 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:27.668 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:27.668 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:27.668 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:27.668 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:27.668 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:27.668 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:27.668 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:27.668 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:27.668 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:27.668 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:27.668 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:27.668 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:27.668 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:27.668 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:27.668 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:27.668 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:27.668 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:27.668 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:27.668 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:27.668 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:27.668 09:55:41 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:27.668 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:27.668 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:27.668 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:27.668 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:27.668 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:27.668 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:27.668 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:27.669 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:27.669 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:27.669 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:27.669 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:27.669 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:27.669 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:27.669 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:27.669 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:27.669 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:27.669 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:27.669 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:27.669 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:27.669 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:27.669 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:27.669 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:27.669 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:27.669 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:27.669 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:27.669 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:27.669 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:27.669 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:27.669 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:27.669 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:27.669 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:27.669 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:27.669 09:55:41 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:27.669 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:27.669 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:27.669 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:27.669 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:27.669 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:27.669 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:27.669 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:27.669 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:27.669 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:14:27.669 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:14:27.669 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:14:27.669 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:14:27.669 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:14:27.669 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:14:27.669 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:14:27.669 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:14:27.669 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:14:27.669 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:14:27.669 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:14:27.669 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:14:27.669 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:14:27.669 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:27.669 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 9033092 kB' 'MemAvailable: 10557636 kB' 'Buffers: 2436 kB' 'Cached: 1736364 kB' 'SwapCached: 0 kB' 'Active: 493664 kB' 'Inactive: 1366152 kB' 'Active(anon): 131480 kB' 'Inactive(anon): 0 kB' 'Active(file): 362184 kB' 'Inactive(file): 1366152 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 252 kB' 'Writeback: 0 kB' 'AnonPages: 122676 kB' 'Mapped: 48596 kB' 'Shmem: 10464 kB' 'KReclaimable: 66740 kB' 'Slab: 142480 kB' 'SReclaimable: 66740 kB' 'SUnreclaim: 75740 kB' 'KernelStack: 6352 kB' 'PageTables: 4216 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 353448 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54968 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 
'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 4018176 kB' 'DirectMap1G: 10485760 kB' 00:14:27.669 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:27.669 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:27.669 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:27.669 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:27.669 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:27.669 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:27.669 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:27.669 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:27.669 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:27.669 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:27.669 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:27.669 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:27.669 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:27.669 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:27.669 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:27.669 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:27.669 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:27.669 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:27.669 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:27.669 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:27.669 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:27.669 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:27.669 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:27.669 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:27.669 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:27.669 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:27.669 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:27.669 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:27.669 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:27.669 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:27.669 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- 
# continue 00:14:27.669 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:27.669 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:27.669 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:27.669 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:27.669 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:27.669 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:27.669 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:27.669 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:27.669 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:27.669 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:27.669 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:27.669 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:27.669 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:27.669 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:27.669 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:27.669 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:27.669 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:27.669 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:27.669 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:27.669 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:27.669 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:27.669 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:27.669 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:27.669 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:27.669 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:27.669 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:27.669 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:27.669 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:27.669 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:27.669 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:27.669 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:27.669 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:27.669 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 
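The repeated IFS=': ' / read -r / continue records above come from setup/common.sh's get_meminfo helper walking /proc/meminfo one key at a time until it reaches the requested field (HugePages_Rsvd in this pass). A minimal sketch of that parsing pattern, simplified from what the trace shows and not the verbatim SPDK helper:

    get_meminfo() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do    # IFS=': ' splits "Key:   value kB" into three fields
            [[ $var == "$get" ]] || continue    # each non-matching key shows up as one "continue" record
            echo "$val"                         # numeric field only; a trailing "kB" lands in _
            return 0
        done < /proc/meminfo
    }
    get_meminfo HugePages_Rsvd    # prints 0 on this runner, hence resv=0 a little further down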
00:14:27.669 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:27.669 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:27.669 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:27.669 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:27.669 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:27.669 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:27.669 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:27.669 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:27.669 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:27.670 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:27.670 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:27.670 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:27.670 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:27.670 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:27.670 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:27.670 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:27.670 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:27.670 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:27.670 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:27.670 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:27.670 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:27.670 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:27.670 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:27.670 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:27.670 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:27.670 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:27.670 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:27.670 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:27.670 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:27.670 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:27.670 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:27.670 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:27.670 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:27.670 09:55:41 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:27.670 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:27.670 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:27.670 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:27.670 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:27.670 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:27.670 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:27.670 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:27.670 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:27.670 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:27.670 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:27.670 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:27.670 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:27.670 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:27.670 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:27.670 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:27.670 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:27.670 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:27.670 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:27.670 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:27.670 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:27.670 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:27.670 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:27.670 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:27.670 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:27.670 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:27.670 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:27.670 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:27.670 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:27.670 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:27.670 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:27.670 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:27.670 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
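The backslash-riddled right-hand side in every [[ ... ]] record (\H\u\g\e\P\a\g\e\s\_\R\s\v\d and so on) is only an xtrace rendering artifact: because the comparison operand is quoted, bash escapes each character when it echoes the command, so the pattern is matched literally rather than as a glob. A tiny standalone reproduction (hypothetical snippet, default PS4 instead of the timestamped one used here):

    get=HugePages_Rsvd
    var=MemTotal
    set -x
    [[ $var == "$get" ]]    # trace prints roughly: + [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
    set +x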
00:14:27.670 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:27.670 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:27.670 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:27.670 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:27.670 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:27.670 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:27.670 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:27.670 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:27.670 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:27.670 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:27.670 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:27.670 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:27.670 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:27.670 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:27.670 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:27.670 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:27.670 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:27.670 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:27.670 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:27.670 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:27.670 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:27.670 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:27.670 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:27.670 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:27.670 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:27.670 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:27.670 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:27.670 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:27.670 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:27.670 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:27.670 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:27.670 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:27.670 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:27.670 
09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:27.670 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:27.670 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:27.670 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:27.670 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:27.670 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:27.670 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:27.670 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:27.670 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:27.670 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:27.670 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:27.670 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:27.671 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:27.671 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:27.671 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:27.671 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:27.671 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:27.671 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:27.671 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:27.671 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:27.671 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:27.671 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:27.671 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:27.671 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:27.671 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:27.671 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:27.671 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:27.671 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:27.671 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:27.671 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:27.671 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:27.671 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:27.671 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:27.671 09:55:41 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:27.671 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:27.671 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:27.671 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:27.671 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:27.671 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:27.671 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:14:27.671 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:14:27.671 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:14:27.671 nr_hugepages=512 00:14:27.671 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:14:27.671 resv_hugepages=0 00:14:27.671 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:14:27.671 surplus_hugepages=0 00:14:27.671 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:14:27.671 anon_hugepages=0 00:14:27.671 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:14:27.671 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:14:27.671 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:14:27.671 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:14:27.671 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:14:27.671 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:14:27.671 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:14:27.671 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:14:27.671 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:14:27.671 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:14:27.671 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:14:27.671 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:14:27.671 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:14:27.671 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:27.671 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:27.671 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 9032616 kB' 'MemAvailable: 10557160 kB' 'Buffers: 2436 kB' 'Cached: 1736364 kB' 'SwapCached: 0 kB' 'Active: 493684 kB' 'Inactive: 1366152 kB' 'Active(anon): 131500 kB' 'Inactive(anon): 0 kB' 'Active(file): 362184 kB' 'Inactive(file): 1366152 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 252 
kB' 'Writeback: 0 kB' 'AnonPages: 122676 kB' 'Mapped: 48596 kB' 'Shmem: 10464 kB' 'KReclaimable: 66740 kB' 'Slab: 142476 kB' 'SReclaimable: 66740 kB' 'SUnreclaim: 75736 kB' 'KernelStack: 6352 kB' 'PageTables: 4216 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 353448 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54968 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 4018176 kB' 'DirectMap1G: 10485760 kB' 00:14:27.671 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:27.671 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:27.671 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:27.671 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:27.671 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:27.671 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:27.671 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:27.671 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:27.671 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:27.671 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:27.671 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:27.671 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:27.671 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:27.671 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:27.671 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:27.671 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:27.671 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:27.671 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:27.671 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:27.671 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:27.671 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:27.671 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:27.671 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:27.671 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:27.671 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
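This second snapshot is the same helper invoked for HugePages_Total; once the scan below reaches that key it echoes 512, and hugepages.sh@110 then re-checks the pool size against the values gathered so far. With the numbers printed in this trace the check is plain arithmetic (the names nr_hugepages/surp/resv follow the trace; total is illustrative):

    nr_hugepages=512   # pool size this test configured
    resv=0             # HugePages_Rsvd, read in the previous scan
    surp=0             # HugePages_Surp
    total=512          # HugePages_Total, read back by this scan
    (( total == nr_hugepages + surp + resv )) && echo 'hugepage pool fully accounted for'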
00:14:27.671 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:27.671 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:27.671 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:27.671 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:27.671 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:27.671 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:27.671 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:27.671 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:27.671 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:27.671 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:27.671 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:27.671 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:27.671 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:27.671 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:27.671 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:27.671 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:27.671 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:27.671 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:27.671 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:27.671 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:27.671 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:27.671 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:27.671 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:27.671 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:27.671 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:27.671 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:27.671 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:27.671 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:27.671 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:27.671 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:27.671 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:27.671 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:27.671 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
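Further down in this trace, after the total is confirmed, get_nodes enumerates /sys/devices/system/node/node* (a single node on this VM, so no_nodes=1) and the helper runs once more as get_meminfo HugePages_Surp 0, switching its input from /proc/meminfo to the per-node file and stripping the "Node 0 " prefix those files carry. A condensed sketch of that readback, mirroring common.sh@22-29 as they appear later in this trace:

    node=0
    mem_f=/proc/meminfo
    [[ -e /sys/devices/system/node/node$node/meminfo ]] && \
        mem_f=/sys/devices/system/node/node$node/meminfo    # prefer the per-node view when it exists
    shopt -s extglob                                        # the +([0-9]) pattern below needs extglob
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")                        # drop the leading "Node 0 " of per-node lines
    printf '%s\n' "${mem[@]}" | grep -E '^HugePages_(Total|Free|Surp)'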
00:14:27.671 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:27.671 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:27.671 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:27.671 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:27.671 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:27.671 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:27.671 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:27.671 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:27.671 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:27.671 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:27.671 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:27.671 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:27.671 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:27.671 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:27.671 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:27.671 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:27.671 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:27.671 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:27.671 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:27.671 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:27.671 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:27.671 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:27.671 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:27.671 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:27.671 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:27.671 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:27.671 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:27.671 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:27.671 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:27.671 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:27.671 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:27.671 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:27.671 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:27.671 09:55:41 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:27.671 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:27.671 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:27.671 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:27.671 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:27.671 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:27.671 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:27.671 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:27.671 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:27.671 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:27.671 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:27.671 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:27.671 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:27.671 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:27.671 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:27.671 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:27.671 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:27.671 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:27.671 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:27.671 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:27.671 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:27.671 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:27.671 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:27.671 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:27.671 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:27.671 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:27.671 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:27.671 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:27.671 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:27.671 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:27.671 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:27.671 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:27.671 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:27.672 
09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:27.672 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:27.672 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:27.672 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:27.672 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:27.672 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:27.672 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:27.672 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:27.672 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:27.672 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:27.672 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:27.672 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:27.672 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:27.672 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:27.672 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:27.672 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:27.672 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:27.672 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:27.672 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:27.672 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:27.672 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:27.672 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:27.672 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:27.672 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:27.672 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:27.672 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:27.672 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:27.672 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:27.672 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:27.672 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:27.672 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:27.672 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:27.672 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:27.672 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:27.672 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:27.672 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:27.672 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:27.672 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:27.672 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:27.672 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:27.672 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:27.672 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:27.672 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:27.672 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:27.672 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:27.672 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:27.672 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:27.672 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:27.672 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:27.672 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:27.672 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:27.672 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:27.672 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:27.672 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:27.672 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:27.672 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:27.672 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:27.672 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:27.672 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:27.672 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:27.672 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:27.672 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:27.672 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:27.672 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:27.672 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:27.672 09:55:41 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@32 -- # continue 00:14:27.672 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:27.672 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:27.672 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:27.672 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 512 00:14:27.672 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:14:27.672 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:14:27.672 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:14:27.672 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node 00:14:27.672 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:14:27.672 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:14:27.672 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:14:27.672 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:14:27.672 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:14:27.672 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:14:27.672 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:14:27.672 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:14:27.672 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0 00:14:27.672 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:14:27.672 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:14:27.672 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:14:27.672 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:14:27.672 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:14:27.672 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:14:27.672 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:14:27.672 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 9032616 kB' 'MemUsed: 3209352 kB' 'SwapCached: 0 kB' 'Active: 493600 kB' 'Inactive: 1366152 kB' 'Active(anon): 131416 kB' 'Inactive(anon): 0 kB' 'Active(file): 362184 kB' 'Inactive(file): 1366152 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 252 kB' 'Writeback: 0 kB' 'FilePages: 1738800 kB' 'Mapped: 48596 kB' 'AnonPages: 122568 kB' 'Shmem: 10464 kB' 'KernelStack: 6336 kB' 'PageTables: 4168 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 66740 kB' 'Slab: 142464 kB' 'SReclaimable: 66740 kB' 'SUnreclaim: 75724 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 
'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:14:27.672 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:27.672 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:27.672 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:27.672 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:27.672 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:27.672 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:27.672 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:27.672 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:27.672 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:27.672 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:27.672 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:27.672 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:27.672 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:27.672 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:27.672 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:27.672 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:27.672 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:27.672 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:27.672 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:27.672 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:27.672 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:27.672 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:27.672 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:27.672 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:27.672 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:27.672 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:27.672 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:27.672 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:27.672 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:27.672 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:27.672 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:27.672 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:27.672 09:55:41 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:27.672 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:27.672 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:27.672 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:27.672 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:27.672 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:27.672 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:27.672 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:27.672 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:27.672 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:27.672 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:27.672 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:27.672 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:27.672 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:27.672 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:27.672 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:27.672 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:27.672 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:27.672 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:27.672 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:27.672 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:27.672 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:27.672 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:27.672 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:27.672 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:27.672 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:27.672 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:27.672 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:27.672 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:27.672 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:27.672 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:27.672 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:27.672 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:27.672 09:55:41 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:27.672 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:27.672 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:27.672 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:27.672 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:27.672 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:27.672 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:27.672 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:27.672 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:27.672 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:27.672 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:27.672 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:27.672 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:27.672 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:27.672 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:27.672 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:27.672 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:27.672 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:27.672 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:27.672 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:27.672 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:27.672 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:27.672 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:27.672 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:27.672 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:27.672 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:27.672 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:27.672 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:27.672 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:27.672 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:27.672 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:27.672 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:27.673 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:27.673 09:55:41 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:27.673 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:27.673 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:27.673 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:27.673 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:27.673 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:27.673 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:27.673 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:27.673 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:27.673 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:27.673 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:27.673 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:27.673 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:27.673 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:27.673 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:27.673 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:27.673 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:27.673 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:27.673 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:27.673 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:27.673 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:27.673 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:27.673 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:27.673 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:27.673 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:27.673 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:27.673 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:27.673 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:27.673 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:27.673 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:14:27.673 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:27.673 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:27.673 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == 
00:14:27.673 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:14:27.673 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:14:27.673 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:14:27.673 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:14:27.673 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:14:27.673 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:14:27.673 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:14:27.673 node0=512 expecting 512
00:14:27.673 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:14:27.673 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:14:27.673
00:14:27.673 real 0m0.644s
00:14:27.673 user 0m0.286s
00:14:27.673 sys 0m0.398s
00:14:27.673 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable
00:14:27.673 09:55:41 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x
00:14:27.673 ************************************
00:14:27.673 END TEST per_node_1G_alloc
00:14:27.673 ************************************
00:14:27.673 09:55:41 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0
00:14:27.673 09:55:41 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc
00:14:27.673 09:55:41 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:14:27.673 09:55:41 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable
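The START/END banners, the real/user/sys timing, and the xtrace toggling around each test in this log come from the run_test wrapper invoked above (run_test even_2G_alloc even_2G_alloc). The sketch below is a rough reconstruction from what the log shows; run_test_sketch and its body are illustrative, not the actual autotest_common.sh helper.

  # Hedged sketch of a run_test-style wrapper (illustrative only; the real helper
  # in autotest_common.sh is more involved).
  run_test_sketch() {
      local name=$1; shift
      echo "************************************"
      echo "START TEST $name"
      echo "************************************"
      time "$@"                 # produces the real/user/sys lines seen above
      local rc=$?
      echo "************************************"
      echo "END TEST $name"
      echo "************************************"
      return $rc
  }

A call such as run_test_sketch even_2G_alloc even_2G_alloc would mirror the invocation traced above.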
00:14:27.673 09:55:41 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:14:27.673 ************************************
00:14:27.673 START TEST even_2G_alloc
00:14:27.673 ************************************
00:14:27.673 09:55:41 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1123 -- # even_2G_alloc
00:14:27.673 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152
00:14:27.673 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152
00:14:27.673 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:14:27.673 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:14:27.673 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:14:27.673 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:14:27.673 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:14:27.673 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:14:27.673 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:14:27.673 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:14:27.673 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:14:27.673 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:14:27.673 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:14:27.673 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:14:27.673 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:14:27.673 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1024
00:14:27.673 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0
00:14:27.673 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0
00:14:27.673 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:14:27.673 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024
00:14:27.673 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes
00:14:27.673 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output
00:14:27.673 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:14:27.673 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:14:28.241 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:14:28.241 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver
00:14:28.241 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver
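The get_test_nr_hugepages trace above reduces to simple arithmetic: a 2 GB request (2097152 kB) divided by the default 2048 kB hugepage size gives nr_hugepages=1024, and with a single NUMA node the whole count lands on node 0 before setup.sh is re-run with NRHUGE=1024 and HUGE_EVEN_ALLOC=yes. The sketch below only restates that math; the variable names are illustrative, not the script's own.

  # Illustrative sketch of the size-to-hugepage-count math traced above (assumed
  # names; only the arithmetic is taken from the log).
  size_kb=2097152                                    # requested test size: 2 GB in kB
  default_hugepage_kb=2048                           # Hugepagesize reported in /proc/meminfo
  nr_hugepages=$(( size_kb / default_hugepage_kb ))  # -> 1024
  no_nodes=1                                         # this VM reports a single NUMA node
  per_node=$(( nr_hugepages / no_nodes ))            # -> 1024 pages assigned to node 0
  # setup.sh is then re-invoked with NRHUGE=$per_node HUGE_EVEN_ALLOC=yes, which is
  # what produces the PCI/device lines immediately above.
  echo "NRHUGE=$per_node HUGE_EVEN_ALLOC=yes"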
00:14:28.241 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages
00:14:28.241 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node
00:14:28.241 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:14:28.241 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:14:28.241 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp
00:14:28.241 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv
00:14:28.241 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon
00:14:28.241 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:14:28.241 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:14:28.241 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:14:28.241 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:14:28.241 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:14:28.241 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:14:28.241 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:14:28.241 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:14:28.241 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:14:28.241 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:14:28.241 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:14:28.241 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:14:28.241 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:14:28.241 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 7983440 kB' 'MemAvailable: 9507984 kB' 'Buffers: 2436 kB' 'Cached: 1736364 kB' 'SwapCached: 0 kB' 'Active: 493624 kB' 'Inactive: 1366152 kB' 'Active(anon): 131440 kB' 'Inactive(anon): 0 kB' 'Active(file): 362184 kB' 'Inactive(file): 1366152 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 260 kB' 'Writeback: 0 kB' 'AnonPages: 122776 kB' 'Mapped: 48724 kB' 'Shmem: 10464 kB' 'KReclaimable: 66740 kB' 'Slab: 142488 kB' 'SReclaimable: 66740 kB' 'SUnreclaim: 75748 kB' 'KernelStack: 6308 kB' 'PageTables: 4144 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 353448 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55000 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 4018176 kB' 'DirectMap1G: 10485760 kB'
00:14:28.242 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:14:28.242 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:14:28.242 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:14:28.242 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0
00:14:28.242 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:14:28.242 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:14:28.242 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 7984060 kB' 'MemAvailable: 9508604 kB' 'Buffers: 2436 kB' 'Cached: 1736364 kB' 'SwapCached: 0 kB' 'Active: 493440 kB' 'Inactive: 1366152 kB' 'Active(anon): 131256 kB' 'Inactive(anon): 0 kB' 'Active(file): 362184 kB' 'Inactive(file): 1366152 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 260 kB' 'Writeback: 0 kB' 'AnonPages: 122660 kB' 'Mapped: 48596 kB' 'Shmem: 10464 kB' 'KReclaimable: 66740 kB' 'Slab: 142488 kB' 'SReclaimable: 66740 kB' 'SUnreclaim: 75748 kB' 'KernelStack: 6352 kB' 'PageTables: 4220 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 353448 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54984 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 4018176 kB' 'DirectMap1G: 10485760 kB'
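The long key-by-key scans throughout this part of the log are all the same get_meminfo helper from setup/common.sh: it reads /proc/meminfo (or a per-node meminfo file when a node is given), strips any leading "Node <N> " prefix, and walks the "Key: value" pairs until the requested key matches. The following is a paraphrase reconstructed from the xtrace above, not the SPDK source itself, and get_meminfo_sketch is an assumed name.

  # Paraphrased from the xtrace above; names and structure are an approximation.
  get_meminfo_sketch() {
      local get=$1 node=${2:-}
      local var val _
      local mem_f=/proc/meminfo mem
      # Per-node statistics live under /sys/devices/system/node/node<N>/meminfo.
      if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      mapfile -t mem < "$mem_f"
      shopt -s extglob
      # Per-node files prefix each line with "Node <N> "; strip it so keys line up.
      mem=("${mem[@]#Node +([0-9]) }")
      # Scan "Key: value [kB]" pairs; print the value of the first matching key.
      while IFS=': ' read -r var val _; do
          [[ $var == "$get" ]] || continue
          echo "$val"
          return 0
      done < <(printf '%s\n' "${mem[@]}")
      return 1
  }

On this host, get_meminfo_sketch AnonHugePages prints 0, matching the anon=0 assignment seen above.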
00:14:28.243 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:14:28.243 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:14:28.243 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:14:28.243 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0
00:14:28.243 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:14:28.243 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:14:28.243 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 7984060 kB' 'MemAvailable: 9508604 kB' 'Buffers: 2436 kB' 'Cached: 1736364 kB' 'SwapCached: 0 kB' 'Active: 493420 kB' 'Inactive: 1366152 kB' 'Active(anon): 131236 kB' 'Inactive(anon): 0 kB' 'Active(file): 362184 kB' 'Inactive(file): 1366152 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 260 kB' 'Writeback: 0 kB' 'AnonPages: 122656 kB' 'Mapped: 48596 kB' 'Shmem: 10464 kB' 'KReclaimable: 66740 kB' 'Slab: 142484 kB' 'SReclaimable: 66740 kB' 'SUnreclaim: 75744 kB' 'KernelStack: 6352 kB' 'PageTables: 4220 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 353448 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54984 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 4018176 kB' 'DirectMap1G: 10485760 kB'
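At this point the verification has anon=0 and surp=0 and is fetching HugePages_Rsvd; the dump above already shows HugePages_Total: 1024, HugePages_Free: 1024, HugePages_Rsvd: 0 and HugePages_Surp: 0. A hedged sketch of the kind of check this leads up to is shown below; the function name and the exact condition are assumptions, not the hugepages.sh code.

  # Assumed-name sketch: confirm the kernel reports the configured number of 2048 kB
  # hugepages once surplus pages are discounted (values taken from the dump above).
  verify_even_2G_sketch() {
      local expected=1024
      local total surp resv
      total=$(get_meminfo_sketch HugePages_Total)   # 1024 in this run
      surp=$(get_meminfo_sketch HugePages_Surp)     # 0 in this run
      resv=$(get_meminfo_sketch HugePages_Rsvd)     # 0 in this run
      echo "node0=$(( total - surp )) expecting $expected"
      (( total - surp == expected ))
  }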
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:28.244 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:28.244 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:28.244 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:14:28.244 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:28.244 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:28.244 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:28.244 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:14:28.244 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:28.244 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:28.244 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:28.244 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:14:28.244 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:28.244 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:28.244 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:28.244 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:14:28.244 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:28.244 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:28.244 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:28.244 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:14:28.244 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:28.244 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:28.244 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:28.244 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:14:28.244 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:28.244 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:28.244 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:28.244 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:14:28.244 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:28.244 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:28.244 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:28.244 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:14:28.244 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:28.244 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:28.244 09:55:41 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:28.244 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:14:28.244 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:28.244 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:28.244 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:28.244 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:14:28.244 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:28.244 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:28.244 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:28.244 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:14:28.244 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:28.244 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:28.244 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:28.244 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:14:28.244 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:28.244 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:28.244 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:28.244 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:14:28.244 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:28.244 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:28.244 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:28.244 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:14:28.244 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:28.244 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:28.244 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:28.244 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:14:28.244 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:28.244 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:28.244 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:28.244 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:14:28.244 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:28.244 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:28.244 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:28.244 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:14:28.244 09:55:41 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@33 -- # return 0 00:14:28.244 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:14:28.244 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:14:28.244 nr_hugepages=1024 00:14:28.244 resv_hugepages=0 00:14:28.244 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:14:28.244 surplus_hugepages=0 00:14:28.244 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:14:28.244 anon_hugepages=0 00:14:28.244 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:14:28.244 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:14:28.244 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:14:28.244 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:14:28.244 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:14:28.244 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:14:28.244 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:14:28.244 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:14:28.244 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:14:28.244 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:14:28.244 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:14:28.244 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:14:28.244 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:14:28.504 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:28.504 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 7984060 kB' 'MemAvailable: 9508604 kB' 'Buffers: 2436 kB' 'Cached: 1736364 kB' 'SwapCached: 0 kB' 'Active: 493380 kB' 'Inactive: 1366152 kB' 'Active(anon): 131196 kB' 'Inactive(anon): 0 kB' 'Active(file): 362184 kB' 'Inactive(file): 1366152 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 260 kB' 'Writeback: 0 kB' 'AnonPages: 122568 kB' 'Mapped: 48596 kB' 'Shmem: 10464 kB' 'KReclaimable: 66740 kB' 'Slab: 142484 kB' 'SReclaimable: 66740 kB' 'SUnreclaim: 75744 kB' 'KernelStack: 6336 kB' 'PageTables: 4172 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 353448 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54984 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 4018176 kB' 'DirectMap1G: 10485760 kB' 00:14:28.504 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:28.504 09:55:41 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:28.504 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:14:28.504 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:28.504 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:28.504 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:28.504 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:14:28.504 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:28.504 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:28.504 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:28.504 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:14:28.504 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:28.504 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:28.504 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:28.504 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:14:28.504 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:28.504 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:28.504 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:28.504 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:14:28.504 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:28.504 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:28.504 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:28.504 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:14:28.504 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:28.504 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:28.504 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:28.504 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:14:28.504 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:28.504 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:28.504 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:28.504 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:14:28.504 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:28.504 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:28.504 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:28.504 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:14:28.504 09:55:41 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:28.504 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:28.504 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:28.504 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:14:28.504 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:28.504 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:28.504 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:28.504 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:14:28.504 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:28.504 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:28.504 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:28.504 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:14:28.504 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:28.504 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:28.504 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:28.504 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:14:28.504 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:28.504 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:28.504 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:28.504 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:14:28.504 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:28.504 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:28.504 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:28.504 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:14:28.504 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:28.504 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:28.504 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:28.504 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:14:28.504 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:28.504 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:28.504 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:28.504 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:14:28.504 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:28.504 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:28.504 09:55:41 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:28.504 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:14:28.504 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:28.504 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:28.504 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:28.504 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:14:28.504 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:28.504 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:28.504 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:28.504 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:14:28.504 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:28.504 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:28.504 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:28.504 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:14:28.504 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:28.504 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:28.504 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:28.504 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:14:28.504 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:28.504 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:28.504 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:28.504 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:14:28.504 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:28.504 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:28.504 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:28.504 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:14:28.504 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:28.504 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:28.504 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:28.504 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:14:28.504 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:28.504 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:28.504 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:28.504 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:14:28.504 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': 
' 00:14:28.505 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:28.505 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:28.505 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:14:28.505 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:28.505 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:28.505 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:28.505 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:14:28.505 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:28.505 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:28.505 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:28.505 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:14:28.505 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:28.505 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:28.505 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:28.505 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:14:28.505 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:28.505 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:28.505 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:28.505 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:14:28.505 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:28.505 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:28.505 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:28.505 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:14:28.505 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:28.505 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:28.505 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:28.505 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:14:28.505 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:28.505 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:28.505 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:28.505 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:14:28.505 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:28.505 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:28.505 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:28.505 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:14:28.505 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:28.505 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:28.505 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:28.505 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:14:28.505 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:28.505 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:28.505 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:28.505 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:14:28.505 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:28.505 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:28.505 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:28.505 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:14:28.505 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:28.505 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:28.505 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:28.505 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:14:28.505 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:28.505 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:28.505 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:28.505 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:14:28.505 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:28.505 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:28.505 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:28.505 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:14:28.505 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:28.505 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:28.505 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:28.505 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:14:28.505 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:28.505 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:28.505 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:28.505 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:14:28.505 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
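The xtrace above is the meminfo lookup helper at work: it reads /proc/meminfo (or a node's own meminfo file) with IFS=': ', skips every key that is not the one requested, and echoes the value of the matching key, in this case the HugePages_Total scan that ends with echo 1024 just below. A minimal standalone sketch of that pattern follows; the function name and the prefix handling are illustrative assumptions, not the exact setup/common.sh source.

    # get_meminfo_value KEY [NODE] - illustrative sketch of the lookup pattern seen in the trace
    get_meminfo_value() {
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo line var val
        # per-node lookups read the node's own meminfo file when it exists
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        while IFS= read -r line; do
            line=${line#"Node $node "}         # per-node files prefix each key with "Node <n> "
            IFS=': ' read -r var val _ <<< "$line"
            if [[ $var == "$get" ]]; then      # every other key is skipped, as in the trace
                echo "$val"
                return 0
            fi
        done < "$mem_f"
        return 1
    }
    # e.g. get_meminfo_value HugePages_Total     -> 1024 on this VM
    #      get_meminfo_value HugePages_Surp 0    -> 0 for node0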
00:14:28.505 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:28.505 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:28.505 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:14:28.505 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:28.505 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:28.505 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:28.505 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:14:28.505 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:28.505 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:28.505 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:28.505 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:14:28.505 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:28.505 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:28.505 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:28.505 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:14:28.505 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:28.505 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:28.505 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:28.505 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:14:28.505 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:28.505 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:28.505 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:28.505 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:14:28.505 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:14:28.505 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:14:28.505 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:14:28.505 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:14:28.505 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:14:28.505 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:14:28.505 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:14:28.505 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:14:28.505 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:14:28.505 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:14:28.505 09:55:41 
setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:14:28.505 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:14:28.505 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:14:28.505 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:14:28.505 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:14:28.505 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:14:28.505 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:14:28.505 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:14:28.505 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:14:28.505 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:14:28.505 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:28.505 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:28.505 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 7984060 kB' 'MemUsed: 4257908 kB' 'SwapCached: 0 kB' 'Active: 493480 kB' 'Inactive: 1366152 kB' 'Active(anon): 131296 kB' 'Inactive(anon): 0 kB' 'Active(file): 362184 kB' 'Inactive(file): 1366152 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 260 kB' 'Writeback: 0 kB' 'FilePages: 1738800 kB' 'Mapped: 48596 kB' 'AnonPages: 122664 kB' 'Shmem: 10464 kB' 'KernelStack: 6336 kB' 'PageTables: 4172 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 66740 kB' 'Slab: 142484 kB' 'SReclaimable: 66740 kB' 'SUnreclaim: 75744 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:14:28.505 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:28.505 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:14:28.505 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:28.505 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:28.505 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:28.505 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:14:28.505 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:28.505 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:28.506 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:28.506 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:14:28.506 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:28.506 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:28.506 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:28.506 09:55:41 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:14:28.506 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:28.506 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:28.506 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:28.506 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:14:28.506 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:28.506 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:28.506 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:28.506 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:14:28.506 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:28.506 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:28.506 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:28.506 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:14:28.506 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:28.506 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:28.506 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:28.506 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:14:28.506 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:28.506 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:28.506 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:28.506 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:14:28.506 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:28.506 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:28.506 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:28.506 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:14:28.506 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:28.506 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:28.506 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:28.506 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:14:28.506 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:28.506 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:28.506 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:28.506 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:14:28.506 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:28.506 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:14:28.506 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:28.506 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:14:28.506 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:28.506 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:28.506 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:28.506 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:14:28.506 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:28.506 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:28.506 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:28.506 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:14:28.506 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:28.506 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:28.506 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:28.506 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:14:28.506 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:28.506 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:28.506 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:28.506 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:14:28.506 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:28.506 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:28.506 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:28.506 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:14:28.506 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:28.506 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:28.506 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:28.506 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:14:28.506 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:28.506 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:28.506 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:28.506 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:14:28.506 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:28.506 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:28.506 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:28.506 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:14:28.506 09:55:41 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:28.506 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:28.506 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:28.506 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:14:28.506 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:28.506 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:28.506 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:28.506 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:14:28.506 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:28.506 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:28.506 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:28.506 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:14:28.506 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:28.506 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:28.506 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:28.506 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:14:28.506 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:28.506 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:28.506 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:28.506 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:14:28.506 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:28.506 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:28.506 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:28.506 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:14:28.506 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:28.506 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:28.506 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:28.506 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:14:28.506 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:28.506 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:28.506 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:28.506 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:14:28.506 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:28.506 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:28.506 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- 
# [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:28.506 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:14:28.506 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:28.506 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:28.506 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:28.506 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:14:28.506 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:28.506 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:28.506 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:28.506 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:14:28.506 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:28.506 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:28.506 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:28.506 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:14:28.506 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:28.506 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:28.506 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:28.506 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:14:28.506 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:28.506 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:28.506 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:28.507 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:14:28.507 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:28.507 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:28.507 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:28.507 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:14:28.507 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:28.507 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:28.507 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:28.507 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:14:28.507 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:14:28.507 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:14:28.507 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:14:28.507 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:14:28.507 09:55:41 setup.sh.hugepages.even_2G_alloc -- 
setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:14:28.507 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:14:28.507 node0=1024 expecting 1024 00:14:28.507 09:55:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:14:28.507 00:14:28.507 real 0m0.678s 00:14:28.507 user 0m0.321s 00:14:28.507 sys 0m0.401s 00:14:28.507 09:55:41 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:28.507 09:55:41 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:14:28.507 ************************************ 00:14:28.507 END TEST even_2G_alloc 00:14:28.507 ************************************ 00:14:28.507 09:55:41 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:14:28.507 09:55:41 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:14:28.507 09:55:41 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:14:28.507 09:55:41 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:28.507 09:55:41 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:14:28.507 ************************************ 00:14:28.507 START TEST odd_alloc 00:14:28.507 ************************************ 00:14:28.507 09:55:41 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1123 -- # odd_alloc 00:14:28.507 09:55:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:14:28.507 09:55:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176 00:14:28.507 09:55:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:14:28.507 09:55:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:14:28.507 09:55:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:14:28.507 09:55:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:14:28.507 09:55:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:14:28.507 09:55:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:14:28.507 09:55:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:14:28.507 09:55:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:14:28.507 09:55:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:14:28.507 09:55:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:14:28.507 09:55:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:14:28.507 09:55:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:14:28.507 09:55:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:14:28.507 09:55:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1025 00:14:28.507 09:55:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:14:28.507 09:55:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0 00:14:28.507 09:55:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:14:28.507 09:55:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:14:28.507 09:55:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 
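even_2G_alloc passes (node0=1024 expecting 1024), and odd_alloc starts by sizing an odd page count: HUGEMEM=2049 MiB is 2098176 kB, which at the 2048 kB Hugepagesize reported above is just over 1024 pages, so the helper settles on nr_hugepages=1025 and pins all of it on the single node. A back-of-the-envelope check of that sizing (the round-up is an assumption about how the half page is resolved; the trace only shows the input, 2098176, and the result, 1025):

    size_kb=2098176                                   # HUGEMEM=2049 MiB in kB (2049 * 1024)
    page_kb=2048                                      # Hugepagesize from /proc/meminfo
    pages=$(( (size_kb + page_kb - 1) / page_kb ))    # ceiling division -> 1025, an odd count
    echo "nr_hugepages=$pages, hugetlb=$(( pages * page_kb )) kB"   # 1025, 2099200 kB

The 2099200 kB figure matches the Hugetlb line in the meminfo dump that follows, so the requested odd allocation landed in full.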
00:14:28.507 09:55:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:14:28.507 09:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:14:28.507 09:55:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:14:29.079 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:14:29.079 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:14:29.079 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:14:29.079 09:55:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:14:29.079 09:55:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:14:29.079 09:55:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:14:29.079 09:55:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:14:29.079 09:55:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp 00:14:29.079 09:55:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv 00:14:29.079 09:55:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:14:29.079 09:55:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:14:29.079 09:55:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:14:29.079 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:14:29.079 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:14:29.079 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:14:29.079 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:14:29.079 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:14:29.079 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:14:29.079 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:14:29.079 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:14:29.079 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:14:29.079 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.079 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.079 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 7983964 kB' 'MemAvailable: 9508512 kB' 'Buffers: 2436 kB' 'Cached: 1736368 kB' 'SwapCached: 0 kB' 'Active: 493680 kB' 'Inactive: 1366156 kB' 'Active(anon): 131496 kB' 'Inactive(anon): 0 kB' 'Active(file): 362184 kB' 'Inactive(file): 1366156 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 268 kB' 'Writeback: 0 kB' 'AnonPages: 122628 kB' 'Mapped: 48732 kB' 'Shmem: 10464 kB' 'KReclaimable: 66740 kB' 'Slab: 142532 kB' 'SReclaimable: 66740 kB' 'SUnreclaim: 75792 kB' 'KernelStack: 6368 kB' 'PageTables: 4268 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459988 kB' 'Committed_AS: 353448 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55016 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 
'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 4018176 kB' 'DirectMap1G: 10485760 kB' 00:14:29.079 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:29.079 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:29.079 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.079 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.079 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:29.079 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:29.079 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.079 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.079 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:29.079 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:29.079 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.079 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.079 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:29.079 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:29.079 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.079 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.079 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:29.079 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:29.079 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.079 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.079 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:29.079 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:29.079 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.079 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.079 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:29.079 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:29.079 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.079 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.079 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:29.079 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:29.079 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.079 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.079 09:55:42 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:29.079 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:29.079 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.079 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.079 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:29.079 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:29.079 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.079 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.079 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:29.079 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:29.079 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.079 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.079 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:29.079 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:29.079 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.079 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.079 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:29.079 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:29.079 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.079 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.079 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:29.079 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:29.079 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.079 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.079 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:29.079 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:29.079 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.079 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.079 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:29.079 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:29.079 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.079 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.079 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:29.079 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:29.079 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.079 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.079 
09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:29.079 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:29.079 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.079 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.079 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:29.079 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:29.079 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.080 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.080 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:29.080 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:29.080 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.080 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.080 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:29.080 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:29.080 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.080 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.080 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:29.080 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:29.080 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.080 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.080 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:29.080 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:29.080 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.080 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.080 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:29.080 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:29.080 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.080 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.080 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:29.080 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:29.080 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.080 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.080 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:29.080 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:29.080 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.080 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.080 09:55:42 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:29.080 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:29.080 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.080 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.080 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:29.080 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:29.080 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.080 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.080 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:29.080 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:29.080 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.080 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.080 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:29.080 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:29.080 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.080 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.080 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:29.080 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:29.080 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.080 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.080 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:29.080 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:29.080 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.080 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.080 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:29.080 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:29.080 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.080 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.080 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:29.080 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:29.080 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.080 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.080 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:29.080 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:29.080 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.080 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.080 
09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:29.080 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:29.080 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.080 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.080 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:29.080 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:29.080 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.080 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.080 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:29.080 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:29.080 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.080 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.080 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:29.080 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:29.080 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.080 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.080 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:29.080 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:29.080 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.080 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.080 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:29.080 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:14:29.080 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:14:29.080 09:55:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0 00:14:29.080 09:55:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:14:29.080 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:14:29.080 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:14:29.080 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:14:29.080 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:14:29.080 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:14:29.080 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:14:29.080 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:14:29.080 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:14:29.080 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:14:29.080 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.080 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 
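The long run of "[[ key == ... ]] / continue" lines above, and the near-identical scans that follow for HugePages_Surp, HugePages_Rsvd and HugePages_Total, are simply the xtrace of a get_meminfo-style lookup: each /proc/meminfo line is split on ': ', every non-matching key is skipped, and a key that never matches falls through to 0. A condensed, hedged sketch of that lookup (an illustration, not the setup/common.sh source):

    # Sketch only: look one key up in /proc/meminfo (or a per-node meminfo file).
    get_meminfo() {
        local get=$1 node=${2:-}                 # e.g. get_meminfo HugePages_Surp 0
        local mem_f=/proc/meminfo line var val _
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        while read -r line; do
            line=${line#"Node $node "}           # per-node files prefix each key with "Node N "
            IFS=': ' read -r var val _ <<< "$line"
            if [[ $var == "$get" ]]; then        # found the key: print the number, drop the unit
                echo "$val"
                return 0
            fi
        done < "$mem_f"
        echo 0                                   # key not present on this kernel -> report 0
    }

    get_meminfo AnonHugePages      # 0 on this VM, hence the anon=0 seen above
    get_meminfo HugePages_Total    # 1025 while odd_alloc has the pool configured

The lookup is O(number of meminfo keys), which is why each call produces roughly sixty continue lines of trace per scan.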
00:14:29.080 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 7983964 kB' 'MemAvailable: 9508512 kB' 'Buffers: 2436 kB' 'Cached: 1736368 kB' 'SwapCached: 0 kB' 'Active: 493440 kB' 'Inactive: 1366156 kB' 'Active(anon): 131256 kB' 'Inactive(anon): 0 kB' 'Active(file): 362184 kB' 'Inactive(file): 1366156 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 268 kB' 'Writeback: 0 kB' 'AnonPages: 122432 kB' 'Mapped: 48600 kB' 'Shmem: 10464 kB' 'KReclaimable: 66740 kB' 'Slab: 142520 kB' 'SReclaimable: 66740 kB' 'SUnreclaim: 75780 kB' 'KernelStack: 6352 kB' 'PageTables: 4224 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459988 kB' 'Committed_AS: 353448 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55000 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 4018176 kB' 'DirectMap1G: 10485760 kB' 00:14:29.080 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:29.080 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:29.080 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.080 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.080 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:29.080 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:29.080 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.080 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.080 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:29.080 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:29.080 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.080 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.080 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:29.080 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:29.080 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.080 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.080 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:29.080 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:29.080 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.081 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.081 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:29.081 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:29.081 
09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.081 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.081 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:29.081 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:29.081 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.081 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.081 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:29.081 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:29.081 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.081 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.081 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:29.081 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:29.081 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.081 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.081 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:29.081 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:29.081 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.081 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.081 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:29.081 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:29.081 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.081 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.081 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:29.081 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:29.081 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.081 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.081 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:29.081 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:29.081 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.081 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.081 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:29.081 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:29.081 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.081 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.081 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:29.081 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # 
continue 00:14:29.081 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.081 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.081 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:29.081 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:29.081 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.081 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.081 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:29.081 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:29.081 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.081 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.081 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:29.081 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:29.081 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.081 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.081 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:29.081 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:29.081 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.081 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.081 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:29.081 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:29.081 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.081 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.081 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:29.081 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:29.081 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.081 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.081 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:29.081 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:29.081 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.081 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.081 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:29.081 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:29.081 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.081 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.081 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:29.081 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # 
continue 00:14:29.081 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.081 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.081 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:29.081 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:29.081 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.081 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.081 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:29.081 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:29.081 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.081 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.081 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:29.081 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:29.081 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.081 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.081 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:29.081 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:29.081 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.081 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.081 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:29.081 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:29.081 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.081 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.081 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:29.081 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:29.081 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.081 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.081 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:29.081 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:29.081 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.081 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.081 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:29.081 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:29.081 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.081 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.081 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:29.081 09:55:42 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:14:29.081 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.081 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.081 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:29.081 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:29.081 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.081 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.081 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:29.081 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:29.081 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.081 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.081 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:29.081 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:29.081 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.081 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.081 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:29.081 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:29.081 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.081 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.081 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:29.081 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:29.081 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.081 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.081 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:29.082 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:29.082 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.082 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.082 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:29.082 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:29.082 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.082 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.082 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:29.082 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:29.082 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.082 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.082 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:29.082 
09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:29.082 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.082 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.082 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:29.082 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:29.082 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.082 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.082 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:29.082 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:29.082 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.082 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.082 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:29.082 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:29.082 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.082 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.082 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:29.082 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:29.082 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.082 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.082 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:29.082 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:29.082 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.082 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.082 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:29.082 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:29.082 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.082 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.082 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:29.082 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:29.082 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.082 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.082 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:29.082 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:29.082 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.082 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.082 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:29.082 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:29.082 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.082 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.082 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:29.082 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:14:29.082 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:14:29.082 09:55:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0 00:14:29.082 09:55:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:14:29.082 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:14:29.082 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:14:29.082 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:14:29.082 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:14:29.082 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:14:29.082 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:14:29.082 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:14:29.082 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:14:29.082 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:14:29.082 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.082 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.082 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 7990296 kB' 'MemAvailable: 9514844 kB' 'Buffers: 2436 kB' 'Cached: 1736368 kB' 'SwapCached: 0 kB' 'Active: 493352 kB' 'Inactive: 1366156 kB' 'Active(anon): 131168 kB' 'Inactive(anon): 0 kB' 'Active(file): 362184 kB' 'Inactive(file): 1366156 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 268 kB' 'Writeback: 0 kB' 'AnonPages: 122560 kB' 'Mapped: 48600 kB' 'Shmem: 10464 kB' 'KReclaimable: 66740 kB' 'Slab: 142520 kB' 'SReclaimable: 66740 kB' 'SUnreclaim: 75780 kB' 'KernelStack: 6320 kB' 'PageTables: 4128 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459988 kB' 'Committed_AS: 353448 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55000 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 4018176 kB' 'DirectMap1G: 10485760 kB' 00:14:29.082 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:29.082 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:29.082 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.082 09:55:42 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.082 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:29.082 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:29.082 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.082 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.082 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:29.082 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:29.082 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.082 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.082 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:29.082 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:29.082 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.082 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.082 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:29.082 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:29.082 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.082 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.082 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:29.082 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:29.082 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.082 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.082 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:29.082 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:29.082 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.082 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.082 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:29.082 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:29.082 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.082 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.082 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:29.082 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:29.082 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.082 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.082 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:29.082 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:29.082 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
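The HugePages_Surp pass just finished with surp=0, and the HugePages_Rsvd pass running through these lines ends the same way with resv=0. Together with anon=0 and the 1025 pages requested, those values are all that the consistency check echoed a little further down needs. Roughly, under the assumption that the verification is a plain arithmetic comparison over values returned by the lookup sketched above:

    # Sketch only: the accounting that odd_alloc's verification boils down to.
    nr_hugepages=1025                               # requested by the test
    total=$(get_meminfo HugePages_Total)            # 1025 in this run
    surp=$(get_meminfo HugePages_Surp)              # 0
    resv=$(get_meminfo HugePages_Rsvd)              # 0

    # The kernel pool must match the request once surplus and reserved
    # pages are accounted for, i.e. both comparisons hold with 0/0 here.
    (( total == nr_hugepages + surp + resv )) && (( total == nr_hugepages )) \
        && echo "nr_hugepages=$nr_hugepages verified" \
        || echo "hugepage accounting mismatch"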
00:14:29.082 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.082 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:29.082 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:29.082 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.082 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.082 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:29.082 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:29.082 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.082 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.082 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:29.082 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:29.082 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.082 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.082 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:29.082 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:29.083 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.083 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.083 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:29.083 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:29.083 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.083 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.083 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:29.083 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:29.083 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.083 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.083 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:29.083 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:29.083 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.083 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.083 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:29.083 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:29.083 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.083 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.083 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:29.083 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:29.083 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:14:29.083 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.083 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:29.083 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:29.083 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.083 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.083 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:29.083 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:29.083 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.083 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.083 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:29.083 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:29.083 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.083 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.083 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:29.083 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:29.083 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.083 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.083 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:29.083 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:29.083 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.083 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.083 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:29.083 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:29.083 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.083 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.083 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:29.083 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:29.083 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.083 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.083 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:29.083 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:29.083 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.083 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.083 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:29.083 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:29.083 09:55:42 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:14:29.083 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.083 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:29.083 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:29.083 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.083 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.083 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:29.083 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:29.083 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.083 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.083 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:29.083 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:29.083 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.083 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.083 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:29.083 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:29.083 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.083 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.083 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:29.083 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:29.083 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.083 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.083 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:29.083 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:29.083 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.083 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.083 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:29.083 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:29.083 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.083 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.083 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:29.083 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:29.083 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.083 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.083 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:29.083 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:29.083 09:55:42 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.083 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.083 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:29.083 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:29.083 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.083 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.083 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:29.083 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:29.083 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.083 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.083 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:29.083 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:29.083 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.083 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.083 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:29.083 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:29.083 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.083 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.083 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:29.084 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:29.084 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.084 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.084 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:29.084 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:29.084 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.084 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.084 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:29.084 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:29.084 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.084 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.084 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:29.084 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:29.084 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.084 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.084 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:29.084 09:55:42 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:14:29.084 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.084 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.084 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:29.084 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:29.084 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.084 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.084 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:29.084 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:29.084 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.084 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.084 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:29.084 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:29.084 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.084 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.084 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:29.084 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:29.084 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.084 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.084 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:29.084 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:14:29.084 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:14:29.084 09:55:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0 00:14:29.084 09:55:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:14:29.084 nr_hugepages=1025 00:14:29.084 resv_hugepages=0 00:14:29.084 09:55:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:14:29.084 surplus_hugepages=0 00:14:29.084 09:55:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:14:29.084 anon_hugepages=0 00:14:29.084 09:55:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:14:29.084 09:55:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:14:29.084 09:55:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:14:29.084 09:55:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:14:29.084 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:14:29.084 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:14:29.084 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:14:29.084 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:14:29.084 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # 
mem_f=/proc/meminfo 00:14:29.084 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:14:29.084 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:14:29.084 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:14:29.084 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:14:29.084 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 7990296 kB' 'MemAvailable: 9514844 kB' 'Buffers: 2436 kB' 'Cached: 1736368 kB' 'SwapCached: 0 kB' 'Active: 493688 kB' 'Inactive: 1366156 kB' 'Active(anon): 131504 kB' 'Inactive(anon): 0 kB' 'Active(file): 362184 kB' 'Inactive(file): 1366156 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 268 kB' 'Writeback: 0 kB' 'AnonPages: 122692 kB' 'Mapped: 48600 kB' 'Shmem: 10464 kB' 'KReclaimable: 66740 kB' 'Slab: 142520 kB' 'SReclaimable: 66740 kB' 'SUnreclaim: 75780 kB' 'KernelStack: 6352 kB' 'PageTables: 4224 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459988 kB' 'Committed_AS: 353448 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55000 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 4018176 kB' 'DirectMap1G: 10485760 kB' 00:14:29.084 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.084 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.084 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:29.084 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:29.084 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.084 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.084 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:29.084 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:29.084 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.084 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.084 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:29.084 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:29.084 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.084 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.084 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:29.084 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:29.084 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.084 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:14:29.084 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:29.084 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:29.084 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.084 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.084 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:29.084 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:29.084 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.084 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.084 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:29.084 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:29.084 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.084 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.084 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:29.084 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:29.084 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.084 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.084 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:29.084 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:29.084 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.084 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.084 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:29.084 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:29.084 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.084 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.084 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:29.084 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:29.084 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.084 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.084 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:29.084 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:29.084 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.084 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.084 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:29.084 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:29.084 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.084 09:55:42 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:14:29.084 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:29.084 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:29.084 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.084 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.084 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:29.084 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:29.084 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.084 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.084 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:29.084 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:29.085 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.085 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.085 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:29.085 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:29.085 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.085 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.085 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:29.085 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:29.085 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.085 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.085 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:29.085 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:29.085 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.085 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.085 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:29.085 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:29.085 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.085 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.085 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:29.085 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:29.085 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.085 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.085 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:29.085 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:29.085 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.085 09:55:42 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.085 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:29.085 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:29.085 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.085 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.085 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:29.085 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:29.085 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.085 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.085 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:29.085 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:29.085 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.085 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.085 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:29.085 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:29.085 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.085 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.085 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:29.085 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:29.085 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.085 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.085 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:29.085 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:29.085 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.085 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.085 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:29.085 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:29.085 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.085 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.085 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:29.085 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:29.085 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.085 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.085 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:29.085 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:29.085 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:14:29.085 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.085 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:29.085 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:29.085 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.085 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.085 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:29.085 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:29.085 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.085 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.085 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:29.085 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:29.085 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.085 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.085 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:29.085 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:29.085 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.085 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.085 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:29.085 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:29.085 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.085 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.085 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:29.085 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:29.085 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.085 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.085 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:29.085 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:29.085 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.085 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.085 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:29.085 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:29.085 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.085 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.085 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:29.085 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:29.085 09:55:42 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.085 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.085 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:29.085 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:29.085 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.085 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.085 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:29.085 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:29.085 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.085 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.085 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:29.085 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:29.085 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.085 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.085 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:29.085 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:29.085 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.085 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.085 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:29.085 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:29.085 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.085 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.085 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:29.085 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:29.085 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.085 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.085 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:29.085 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:29.085 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.085 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.085 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:29.085 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:29.085 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.085 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.085 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:29.085 09:55:42 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@33 -- # echo 1025 00:14:29.085 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:14:29.085 09:55:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:14:29.085 09:55:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:14:29.086 09:55:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:14:29.086 09:55:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:14:29.086 09:55:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1025 00:14:29.086 09:55:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:14:29.086 09:55:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:14:29.086 09:55:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:14:29.086 09:55:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:14:29.086 09:55:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:14:29.086 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:14:29.086 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:14:29.086 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:14:29.086 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:14:29.086 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:14:29.086 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:14:29.086 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:14:29.086 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:14:29.086 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:14:29.086 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.086 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 7990296 kB' 'MemUsed: 4251672 kB' 'SwapCached: 0 kB' 'Active: 493392 kB' 'Inactive: 1366156 kB' 'Active(anon): 131208 kB' 'Inactive(anon): 0 kB' 'Active(file): 362184 kB' 'Inactive(file): 1366156 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 268 kB' 'Writeback: 0 kB' 'FilePages: 1738804 kB' 'Mapped: 48600 kB' 'AnonPages: 122592 kB' 'Shmem: 10464 kB' 'KernelStack: 6336 kB' 'PageTables: 4176 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 66740 kB' 'Slab: 142520 kB' 'SReclaimable: 66740 kB' 'SUnreclaim: 75780 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Surp: 0' 00:14:29.086 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.086 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:29.086 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:29.086 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.086 09:55:42 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.086 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:29.086 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:29.086 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.086 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.086 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:29.086 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:29.086 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.086 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.086 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:29.086 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:29.086 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.086 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.086 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:29.086 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:29.086 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.086 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.086 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:29.086 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:29.086 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.086 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.086 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:29.086 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:29.086 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.086 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.086 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:29.086 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:29.086 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.086 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.086 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:29.086 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:29.086 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.086 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.086 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:29.086 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:29.086 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
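The HugePages_Surp lookup in the records above runs with node=0, so the common.sh helper switches from /proc/meminfo to the per-node sysfs file before scanning (the trace's setup/common.sh@22-@24). A minimal stand-alone sketch of that path selection, with a hypothetical helper name and not the verbatim setup/common.sh source:

  # Sketch: pick the meminfo source for an optional NUMA node (hypothetical helper).
  get_meminfo_file() {
    local node=$1            # empty for system-wide, "0" for node 0
    local mem_f=/proc/meminfo
    # When a node is given and its sysfs meminfo exists, prefer it; per-node files
    # prefix every line with "Node <n> ", which the trace strips via
    # mem=("${mem[@]#Node +([0-9]) }") before parsing.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
      mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    echo "$mem_f"
  }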
00:14:29.086 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.086 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:29.086 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:29.086 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.086 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.086 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:29.086 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:29.086 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.086 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.086 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:29.086 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:29.086 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.086 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.086 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:29.086 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:29.086 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.086 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.086 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:29.086 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:29.086 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.086 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.086 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:29.086 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:29.086 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.086 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.086 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:29.086 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:29.086 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.086 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.086 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:29.086 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:29.086 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.086 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.086 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:29.086 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:29.086 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
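Every IFS=': ' / read -r var val _ / continue triple in this log is one pass of the same scan: split a meminfo line on the colon and whitespace, skip it unless the key matches the requested field, then echo the value. A self-contained sketch of an equivalent loop (assumed equivalent, not the verbatim setup/common.sh code):

  # Sketch: return the value of one field from a meminfo-style file.
  scan_meminfo_field() {
    local get=$1 file=${2:-/proc/meminfo} var val _
    while IFS=': ' read -r var val _; do
      [[ $var == "$get" ]] || continue   # e.g. skip lines until HugePages_Surp is reached
      echo "$val"
      return 0
    done < "$file"
    return 1
  }
  # Example: scan_meminfo_field HugePages_Total   -> 1025 on this runner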
00:14:29.086 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.086 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:29.086 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:29.086 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.086 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.086 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:29.086 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:29.086 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.086 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.086 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:29.086 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:29.086 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.086 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.086 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:29.086 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:29.086 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.086 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.086 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:29.086 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:29.086 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.086 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.086 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:29.086 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:29.086 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.086 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.086 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:29.086 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:29.086 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.086 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.086 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:29.086 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:29.087 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.087 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.087 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:29.087 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:29.087 09:55:42 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:14:29.087 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.087 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:29.087 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:29.087 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.087 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.087 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:29.087 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:29.087 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.087 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.087 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:29.087 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:29.087 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.087 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.087 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:29.087 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:29.087 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.087 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.087 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:29.087 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:29.087 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.087 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.087 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:29.087 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:29.087 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.087 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.087 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:29.087 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:29.087 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.087 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.087 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:29.087 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:14:29.087 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.087 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.087 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:29.087 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 
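Stripped of the per-field scanning, the odd_alloc verification reduces to simple accounting on the values this run reported: 1025 hugepages total, 0 reserved, 0 surplus, all on node 0. A hedged sketch of that arithmetic (numbers taken from this run; the function name is illustrative, not from setup/hugepages.sh):

  # Sketch: the consistency check odd_alloc amounts to for this run's numbers.
  verify_odd_alloc() {
    local nr_hugepages=1025 surp=0 resv=0   # values reported by meminfo above
    local -a nodes_test=([0]=1025)          # single NUMA node on this VM
    (( nr_hugepages + surp + resv == 1025 )) || return 1
    # the per-node count, plus reserved and surplus pages, must add back up
    (( nodes_test[0] + surp + resv == 1025 )) || return 1
    echo "node0=${nodes_test[0]} expecting 1025"
  }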
00:14:29.087 09:55:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:14:29.087 09:55:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:14:29.087 09:55:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:14:29.087 09:55:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:14:29.087 09:55:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:14:29.087 09:55:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1025 expecting 1025' 00:14:29.087 node0=1025 expecting 1025 00:14:29.087 09:55:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 1025 == \1\0\2\5 ]] 00:14:29.087 00:14:29.087 real 0m0.687s 00:14:29.087 user 0m0.330s 00:14:29.087 sys 0m0.398s 00:14:29.087 09:55:42 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:29.087 09:55:42 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:14:29.087 ************************************ 00:14:29.087 END TEST odd_alloc 00:14:29.087 ************************************ 00:14:29.346 09:55:42 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:14:29.346 09:55:42 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:14:29.346 09:55:42 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:14:29.346 09:55:42 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:29.346 09:55:42 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:14:29.346 ************************************ 00:14:29.346 START TEST custom_alloc 00:14:29.346 ************************************ 00:14:29.346 09:55:42 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1123 -- # custom_alloc 00:14:29.346 09:55:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:14:29.346 09:55:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:14:29.346 09:55:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:14:29.346 09:55:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:14:29.346 09:55:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:14:29.346 09:55:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:14:29.346 09:55:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:14:29.346 09:55:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:14:29.346 09:55:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:14:29.346 09:55:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:14:29.346 09:55:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:14:29.346 09:55:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:14:29.346 09:55:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:14:29.346 09:55:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:14:29.346 09:55:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:14:29.346 09:55:42 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@67 -- # nodes_test=() 00:14:29.346 09:55:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:14:29.346 09:55:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:14:29.346 09:55:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:14:29.346 09:55:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:14:29.346 09:55:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:14:29.346 09:55:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:14:29.346 09:55:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:14:29.346 09:55:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:14:29.346 09:55:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:14:29.346 09:55:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 1 > 1 )) 00:14:29.346 09:55:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:14:29.346 09:55:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:14:29.346 09:55:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:14:29.346 09:55:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:14:29.346 09:55:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:14:29.346 09:55:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:14:29.346 09:55:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:14:29.346 09:55:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:14:29.346 09:55:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:14:29.346 09:55:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:14:29.346 09:55:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:14:29.346 09:55:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:14:29.346 09:55:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:14:29.346 09:55:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:14:29.346 09:55:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:14:29.346 09:55:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512' 00:14:29.346 09:55:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:14:29.346 09:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:14:29.346 09:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:14:29.606 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:14:29.875 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:14:29.875 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:14:29.875 09:55:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=512 00:14:29.875 09:55:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- 
# verify_nr_hugepages 00:14:29.875 09:55:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:14:29.875 09:55:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:14:29.875 09:55:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:14:29.875 09:55:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:14:29.875 09:55:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:14:29.875 09:55:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:14:29.875 09:55:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:14:29.875 09:55:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:14:29.875 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:14:29.875 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:14:29.875 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:14:29.875 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:14:29.875 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:14:29.875 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:14:29.875 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:14:29.875 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:14:29.875 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:14:29.875 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.875 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.875 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 9033708 kB' 'MemAvailable: 10558256 kB' 'Buffers: 2436 kB' 'Cached: 1736368 kB' 'SwapCached: 0 kB' 'Active: 493976 kB' 'Inactive: 1366156 kB' 'Active(anon): 131792 kB' 'Inactive(anon): 0 kB' 'Active(file): 362184 kB' 'Inactive(file): 1366156 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 272 kB' 'Writeback: 0 kB' 'AnonPages: 122664 kB' 'Mapped: 48728 kB' 'Shmem: 10464 kB' 'KReclaimable: 66740 kB' 'Slab: 142520 kB' 'SReclaimable: 66740 kB' 'SUnreclaim: 75780 kB' 'KernelStack: 6368 kB' 'PageTables: 4280 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 353448 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55000 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 4018176 kB' 'DirectMap1G: 10485760 kB' 00:14:29.875 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:29.875 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:29.875 09:55:43 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.875 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.875 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:29.875 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:29.875 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.875 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.875 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:29.875 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:29.875 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.875 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.875 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:29.875 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:29.875 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.875 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.875 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:29.875 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:29.875 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.875 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.875 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:29.875 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:29.875 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.875 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.875 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:29.875 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:29.875 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.875 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.875 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:29.875 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:29.875 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.875 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.875 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:29.875 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:29.875 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.875 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.875 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
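For context on the custom_alloc prologue above: the 1048576 kB request is divided by the 2048 kB Hugepagesize reported in this run's meminfo to give 512 pages, which are then pinned to node 0 via HUGENODE='nodes_hp[0]=512' before scripts/setup.sh is re-run. A minimal sketch of that conversion (helper name hypothetical):

  # Sketch: convert a size request in kB into a hugepage count, as custom_alloc does here.
  pages_for_size() {
    local size_kb=$1 default_kb
    default_kb=$(awk '/Hugepagesize:/ {print $2}' /proc/meminfo)   # 2048 on this runner
    (( size_kb >= default_kb )) || return 1
    echo $(( size_kb / default_kb ))
  }
  # pages_for_size 1048576  -> 512, i.e. HUGENODE='nodes_hp[0]=512'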
00:14:29.875 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:29.875 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.875 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.875 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:29.875 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:29.875 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.875 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.875 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:29.876 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:29.876 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.876 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.876 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:29.876 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:29.876 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.876 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.876 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:29.876 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:29.876 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.876 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.876 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:29.876 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:29.876 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.876 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.876 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:29.876 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:29.876 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.876 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.876 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:29.876 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:29.876 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.876 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.876 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:29.876 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:29.876 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.876 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.876 09:55:43 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:29.876 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:29.876 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.876 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.876 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:29.876 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:29.876 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.876 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.876 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:29.876 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:29.876 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.876 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.876 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:29.876 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:29.876 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.876 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.876 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:29.876 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:29.876 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.876 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.876 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:29.876 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:29.876 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.876 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.876 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:29.876 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:29.876 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.876 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.876 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:29.876 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:29.876 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.876 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.876 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:29.876 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:29.876 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.876 09:55:43 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.876 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:29.876 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:29.876 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.876 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.876 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:29.876 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:29.876 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.876 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.876 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:29.876 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:29.876 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.876 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.876 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:29.876 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:29.876 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.876 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.876 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:29.876 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:29.876 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.876 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.876 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:29.876 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:29.876 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.876 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.876 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:29.876 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:29.876 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.876 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.876 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:29.876 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:29.876 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.876 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.876 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:29.876 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # 
continue 00:14:29.876 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.876 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.876 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:29.876 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:29.876 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.876 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.876 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:29.876 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:29.876 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.876 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.876 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:29.876 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:29.876 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.876 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.876 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:29.876 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:29.876 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.876 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.876 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:29.876 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:14:29.876 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:14:29.876 09:55:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:14:29.876 09:55:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:14:29.876 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:14:29.876 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:14:29.876 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:14:29.876 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:14:29.876 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:14:29.876 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:14:29.876 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:14:29.876 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:14:29.876 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:14:29.876 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.876 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.877 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 
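The trace above is the get_meminfo helper from setup/common.sh slurping /proc/meminfo into an array and walking it field by field until it reaches the requested key, here AnonHugePages, which the snapshot reports as 0 kB, so anon=0. The hugepages.sh@96 test only confirms that transparent hugepages are not set to "never" before that lookup. Below is a minimal standalone sketch of the same lookup pattern; the function name get_meminfo_value and its argument handling are illustrative, not the exact SPDK helper.

#!/usr/bin/env bash
# Sketch of the /proc/meminfo lookup pattern seen in the trace
# (illustrative only, not the setup/common.sh implementation).
get_meminfo_value() {
    local key=$1 file=${2:-/proc/meminfo}
    local -a mem
    local line var val _
    # Slurp the file once, as the traced helper does with mapfile -t mem.
    mapfile -t mem < "$file"
    for line in "${mem[@]}"; do
        # Split "Key:   value kB" on ': ' into field name and value.
        IFS=': ' read -r var val _ <<< "$line"
        if [[ $var == "$key" ]]; then
            echo "$val"
            return 0
        fi
    done
    return 1
}

# Example: print the 2 MiB hugepage count (512 in the snapshots above).
get_meminfo_value HugePages_Total

Splitting on IFS=': ' lets one loop handle both "Key: value kB" lines and the bare "Key: value" lines such as HugePages_Total.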
00:14:29.876 09:55:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:14:29.876 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:14:29.876 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:14:29.876 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:14:29.876 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:14:29.876 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:14:29.876 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:14:29.876 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:14:29.876 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:14:29.876 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:14:29.876 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:14:29.876 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:14:29.877 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 9033708 kB' 'MemAvailable: 10558256 kB' 'Buffers: 2436 kB' 'Cached: 1736368 kB' 'SwapCached: 0 kB' 'Active: 493460 kB' 'Inactive: 1366156 kB' 'Active(anon): 131276 kB' 'Inactive(anon): 0 kB' 'Active(file): 362184 kB' 'Inactive(file): 1366156 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 276 kB' 'Writeback: 0 kB' 'AnonPages: 122712 kB' 'Mapped: 48600 kB' 'Shmem: 10464 kB' 'KReclaimable: 66740 kB' 'Slab: 142520 kB' 'SReclaimable: 66740 kB' 'SUnreclaim: 75780 kB' 'KernelStack: 6352 kB' 'PageTables: 4224 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 353448 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54968 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 4018176 kB' 'DirectMap1G: 10485760 kB'
00:14:29.877 09:55:43 setup.sh.hugepages.custom_alloc -- [trace condensed: setup/common.sh@31-@32 read each field from the snapshot above and continued until the HugePages_Surp line was reached]
00:14:29.878 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:14:29.878 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:14:29.878 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:14:29.878 09:55:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0
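Each lookup above runs with an empty node argument, so the [[ -e /sys/devices/system/node/node/meminfo ]] test fails and the helper falls back to the system-wide /proc/meminfo; the mem=("${mem[@]#Node +([0-9]) }") step only matters for the per-node files, whose lines carry a "Node <N> " prefix. The sketch below shows that fallback under the assumption of an optional node-number argument; the name node_meminfo_value is illustrative, not the SPDK helper.

#!/usr/bin/env bash
# Sketch of the per-node fallback visible in the trace (illustrative only).
shopt -s extglob

node_meminfo_value() {
    local key=$1 node=${2:-}
    local mem_f=/proc/meminfo
    local -a mem
    local line var val _
    # Prefer the node's own meminfo when a node is given and the file exists.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    # Per-node meminfo lines start with "Node <N> "; strip that prefix so the
    # field names match the system-wide format.
    mem=("${mem[@]#Node +([0-9]) }")
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$key" ]] && { echo "$val"; return 0; }
    done
    return 1
}

# Example: surplus hugepages system-wide, then for node 0 if that path exists.
node_meminfo_value HugePages_Surp
node_meminfo_value HugePages_Surp 0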
00:14:29.878 09:55:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:14:29.878 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:14:29.878 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:14:29.878 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:14:29.878 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:14:29.878 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:14:29.878 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:14:29.878 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:14:29.878 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:14:29.878 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:14:29.878 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:14:29.878 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:14:29.878 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 9033708 kB' 'MemAvailable: 10558256 kB' 'Buffers: 2436 kB' 'Cached: 1736368 kB' 'SwapCached: 0 kB' 'Active: 493440 kB' 'Inactive: 1366156 kB' 'Active(anon): 131256 kB' 'Inactive(anon): 0 kB' 'Active(file): 362184 kB' 'Inactive(file): 1366156 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 276 kB' 'Writeback: 0 kB' 'AnonPages: 122648 kB' 'Mapped: 48600 kB' 'Shmem: 10464 kB' 'KReclaimable: 66740 kB' 'Slab: 142516 kB' 'SReclaimable: 66740 kB' 'SUnreclaim: 75776 kB' 'KernelStack: 6336 kB' 'PageTables: 4176 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 353448 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54984 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 4018176 kB' 'DirectMap1G: 10485760 kB'
00:14:29.879 09:55:43 setup.sh.hugepages.custom_alloc -- [trace condensed: setup/common.sh@31-@32 read each field from the snapshot above and continued until the HugePages_Rsvd line was reached]
00:14:29.880 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:14:29.880 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:14:29.880 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:14:29.880 09:55:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0
00:14:29.880 09:55:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=512
00:14:29.880 nr_hugepages=512
00:14:29.880 resv_hugepages=0
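With anon, surp and resv all extracted as 0 and the pool reporting 512 pages, the checks traced just below (hugepages.sh@107 and @109) assert that the custom allocation of 512 hugepages is fully accounted for, with no surplus or reserved pages. A rough standalone equivalent of that accounting check follows; 512 is the value used in this run, and reading the counters back with grep/awk here stands in for get_meminfo, whereas the real script tracks nr_hugepages itself.

#!/usr/bin/env bash
# Sketch of the hugepage accounting check performed by the trace below
# (illustrative only; mirrors the arithmetic, not the SPDK script verbatim).
expected=512
nr_hugepages=$(grep -E '^HugePages_Total:' /proc/meminfo | awk '{print $2}')
surp=$(grep -E '^HugePages_Surp:' /proc/meminfo | awk '{print $2}')
resv=$(grep -E '^HugePages_Rsvd:' /proc/meminfo | awk '{print $2}')
anon=$(grep -E '^AnonHugePages:' /proc/meminfo | awk '{print $2}')

echo "nr_hugepages=$nr_hugepages resv_hugepages=$resv surplus_hugepages=$surp anon_hugepages=$anon"

# Mirror of hugepages.sh@107/@109: the expected count must be covered by the
# reported pool plus surplus and reserved pages, and equal the pool itself.
if (( expected == nr_hugepages + surp + resv )) && (( expected == nr_hugepages )); then
    echo "hugepage accounting consistent"
else
    echo "hugepage accounting mismatch" >&2
    exit 1
fi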
00:14:29.880 09:55:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:14:29.880 surplus_hugepages=0 00:14:29.880 09:55:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:14:29.880 anon_hugepages=0 00:14:29.880 09:55:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:14:29.880 09:55:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:14:29.880 09:55:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:14:29.880 09:55:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:14:29.880 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:14:29.880 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:14:29.880 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:14:29.880 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:14:29.880 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:14:29.880 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:14:29.880 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:14:29.880 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:14:29.880 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:14:29.880 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.880 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.880 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 9033708 kB' 'MemAvailable: 10558256 kB' 'Buffers: 2436 kB' 'Cached: 1736368 kB' 'SwapCached: 0 kB' 'Active: 493428 kB' 'Inactive: 1366156 kB' 'Active(anon): 131244 kB' 'Inactive(anon): 0 kB' 'Active(file): 362184 kB' 'Inactive(file): 1366156 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 276 kB' 'Writeback: 0 kB' 'AnonPages: 122632 kB' 'Mapped: 48600 kB' 'Shmem: 10464 kB' 'KReclaimable: 66740 kB' 'Slab: 142516 kB' 'SReclaimable: 66740 kB' 'SUnreclaim: 75776 kB' 'KernelStack: 6336 kB' 'PageTables: 4176 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 353448 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54984 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 4018176 kB' 'DirectMap1G: 10485760 kB' 00:14:29.880 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:29.880 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:29.880 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.880 09:55:43 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.880 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:29.880 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:29.880 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.880 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.880 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:29.880 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:29.880 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.880 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.880 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:29.880 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:29.880 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.880 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.880 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:29.881 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:29.881 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.881 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.881 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:29.881 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:29.881 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.881 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.881 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:29.881 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:29.881 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.881 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.881 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:29.881 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:29.881 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.881 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.881 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:29.881 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:29.881 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.881 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.881 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:29.881 09:55:43 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:14:29.881 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.881 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.881 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:29.881 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:29.881 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.881 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.881 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:29.881 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:29.881 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.881 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.881 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:29.881 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:29.881 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.881 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.881 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:29.881 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:29.881 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.881 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.881 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:29.881 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:29.881 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.881 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.881 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:29.881 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:29.881 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.881 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.881 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:29.881 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:29.881 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.881 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.881 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:29.881 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:29.881 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.881 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.881 09:55:43 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:29.881 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:29.881 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.881 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.881 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:29.881 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:29.881 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.881 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.881 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:29.881 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:29.881 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.881 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.881 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:29.881 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:29.881 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.881 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.881 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:29.881 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:29.881 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.881 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.881 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:29.881 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:29.881 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.881 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.881 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:29.881 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:29.881 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.881 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.881 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:29.881 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:29.881 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.881 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.881 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:29.881 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:29.881 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:14:29.881 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.881 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:29.881 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:29.881 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.881 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.881 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:29.881 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:29.881 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.881 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.881 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:29.881 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:29.881 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.881 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.881 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:29.881 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:29.881 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.881 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.881 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:29.881 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:29.881 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.881 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.881 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:29.881 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:29.881 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.881 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.881 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:29.881 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:29.881 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.881 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.881 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:29.881 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:29.881 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.881 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.881 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:29.881 
09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:29.881 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.881 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.881 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:29.881 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:29.881 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.881 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.881 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:29.881 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:29.882 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.882 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.882 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:29.882 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:29.882 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.882 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.882 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:29.882 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:29.882 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.882 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.882 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:29.882 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:29.882 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.882 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.882 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:29.882 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:29.882 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.882 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.882 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:29.882 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:29.882 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.882 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.882 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:29.882 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:29.882 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.882 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:14:29.882 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:29.882 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:29.882 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.882 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.882 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:29.882 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:29.882 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.882 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.882 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:29.882 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:29.882 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.882 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.882 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:29.882 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:29.882 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.882 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.882 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:29.882 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 512 00:14:29.882 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:14:29.882 09:55:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:14:29.882 09:55:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:14:29.882 09:55:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node 00:14:29.882 09:55:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:14:29.882 09:55:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:14:29.882 09:55:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:14:29.882 09:55:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:14:29.882 09:55:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:14:29.882 09:55:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:14:29.882 09:55:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:14:29.882 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:14:29.882 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:14:29.882 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:14:29.882 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:14:29.882 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 
-- # mem_f=/proc/meminfo 00:14:29.882 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:14:29.882 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:14:29.882 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:14:29.882 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:14:29.882 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.882 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.882 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 9033708 kB' 'MemUsed: 3208260 kB' 'SwapCached: 0 kB' 'Active: 493440 kB' 'Inactive: 1366156 kB' 'Active(anon): 131256 kB' 'Inactive(anon): 0 kB' 'Active(file): 362184 kB' 'Inactive(file): 1366156 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 276 kB' 'Writeback: 0 kB' 'FilePages: 1738804 kB' 'Mapped: 48600 kB' 'AnonPages: 122676 kB' 'Shmem: 10464 kB' 'KernelStack: 6336 kB' 'PageTables: 4176 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 66740 kB' 'Slab: 142516 kB' 'SReclaimable: 66740 kB' 'SUnreclaim: 75776 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:14:29.882 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:29.882 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:29.882 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.882 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.882 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:29.882 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:29.882 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.882 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.882 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:29.882 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:29.882 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.882 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.882 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:29.882 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:29.882 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.882 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.882 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:29.882 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:29.882 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.882 09:55:43 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.882 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:29.882 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:29.882 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.882 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.882 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:29.882 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:29.882 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.882 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.882 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:29.882 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:29.882 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.882 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.882 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:29.882 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:29.882 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.882 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.882 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:29.882 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:29.882 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.882 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.882 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:29.882 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:29.882 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.882 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.882 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:29.882 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:29.882 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.882 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.882 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:29.882 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:29.882 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.882 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.882 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:29.882 09:55:43 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:14:29.882 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.883 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.883 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:29.883 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:29.883 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.883 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.883 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:29.883 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:29.883 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.883 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.883 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:29.883 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:29.883 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.883 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.883 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:29.883 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:29.883 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.883 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.883 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:29.883 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:29.883 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.883 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.883 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:29.883 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:29.883 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.883 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.883 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:29.883 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:29.883 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.883 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.883 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:29.883 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:29.883 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.883 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.883 09:55:43 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:29.883 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:29.883 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.883 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.883 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:29.883 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:29.883 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.883 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.883 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:29.883 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:29.883 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.883 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.883 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:29.883 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:29.883 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.883 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.883 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:29.883 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:29.883 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.883 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.883 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:29.883 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:29.883 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.883 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.883 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:29.883 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:29.883 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.883 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.883 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:29.883 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:29.883 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.883 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.883 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:29.883 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:29.883 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.883 09:55:43 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.883 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:29.883 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:29.883 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.883 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.883 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:29.883 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:29.883 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.883 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.883 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:29.883 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:29.883 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.883 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.883 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:29.883 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:29.883 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.883 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.883 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:29.883 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:14:29.883 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:29.883 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:29.883 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:29.883 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:14:29.883 09:55:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:14:29.883 09:55:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:14:29.883 09:55:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:14:29.883 09:55:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:14:29.883 09:55:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:14:29.883 09:55:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:14:29.883 node0=512 expecting 512 00:14:29.883 09:55:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:14:29.883 00:14:29.883 real 0m0.685s 00:14:29.883 user 0m0.300s 00:14:29.883 sys 0m0.425s 00:14:29.883 09:55:43 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:29.883 09:55:43 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:14:29.883 ************************************ 00:14:29.883 END TEST custom_alloc 
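The custom_alloc check that just finished boils down to the figures echoed above: 512 huge pages on the single node, no surplus, no reserved. As a plain restatement of those numbers (taken from this run's output, not extra test logic):

# Values reported above for this run:
nr_hugepages=512   # HugePages_Total from /proc/meminfo
resv=0             # HugePages_Rsvd
surp=0             # HugePages_Surp (also confirmed per node via node0's meminfo)
(( 512 == nr_hugepages + surp + resv )) && echo "node0=512 expecting 512"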
00:14:29.883 ************************************ 00:14:29.883 09:55:43 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:14:29.883 09:55:43 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:14:29.883 09:55:43 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:14:29.883 09:55:43 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:29.883 09:55:43 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:14:29.883 ************************************ 00:14:29.883 START TEST no_shrink_alloc 00:14:29.883 ************************************ 00:14:29.883 09:55:43 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1123 -- # no_shrink_alloc 00:14:29.883 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:14:29.883 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:14:29.883 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:14:29.883 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift 00:14:29.883 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:14:29.883 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:14:29.883 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:14:29.883 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:14:29.883 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:14:29.883 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:14:29.883 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:14:29.883 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:14:29.884 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:14:29.884 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:14:29.884 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:14:29.884 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:14:29.884 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:14:29.884 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:14:29.884 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0 00:14:29.884 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output 00:14:29.884 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:14:29.884 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:14:30.455 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:14:30.455 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:14:30.455 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:14:30.455 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:14:30.455 
09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:14:30.455 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:14:30.455 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:14:30.455 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:14:30.456 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:14:30.456 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:14:30.456 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:14:30.456 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:14:30.456 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:14:30.456 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:14:30.456 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:14:30.456 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:14:30.456 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:14:30.456 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:14:30.456 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:14:30.456 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:14:30.456 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:14:30.456 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:30.456 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:30.456 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 7990360 kB' 'MemAvailable: 9514904 kB' 'Buffers: 2436 kB' 'Cached: 1736368 kB' 'SwapCached: 0 kB' 'Active: 489720 kB' 'Inactive: 1366156 kB' 'Active(anon): 127536 kB' 'Inactive(anon): 0 kB' 'Active(file): 362184 kB' 'Inactive(file): 1366156 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 284 kB' 'Writeback: 0 kB' 'AnonPages: 118648 kB' 'Mapped: 47952 kB' 'Shmem: 10464 kB' 'KReclaimable: 66736 kB' 'Slab: 142328 kB' 'SReclaimable: 66736 kB' 'SUnreclaim: 75592 kB' 'KernelStack: 6212 kB' 'PageTables: 3708 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 336144 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54920 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 4018176 kB' 'DirectMap1G: 10485760 kB' 00:14:30.456 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:30.456 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
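verify_nr_hugepages opens the same way here for no_shrink_alloc: the "always [madvise] never" string is the transparent_hugepage setting (presumably read from /sys/kernel/mm/transparent_hugepage/enabled), and as long as it is not "[never]" the test also records AnonHugePages. Note that the 2097152 kB requested above, at the 2048 kB page size shown in the meminfo dump, is exactly the 1024 pages now visible in HugePages_Total. A hedged sketch of those two opening steps, reusing the illustrative get_meminfo_sketch from earlier:

# Sketch of the opening of the verification traced above (the sysfs path and
# helper name are assumptions for illustration, not quoted from the SPDK scripts).
thp=$(cat /sys/kernel/mm/transparent_hugepage/enabled)  # "always [madvise] never" in this run
anon=0
if [[ $thp != *"[never]"* ]]; then
    # THP is not disabled, so record anonymous huge pages as well (0 kB here)
    anon=$(get_meminfo_sketch AnonHugePages)
fi
echo "anon_hugepages=$anon"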
00:14:30.456 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:30.456 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:30.456 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:30.456 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:30.456 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:30.456 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:30.456 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:30.456 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:30.456 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:30.456 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:30.456 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:30.456 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:30.456 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:30.456 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:30.456 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:30.456 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:30.456 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:30.456 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:30.456 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:30.456 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:30.456 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:30.456 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:30.456 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:30.456 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:30.456 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:30.456 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:30.456 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:30.456 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:30.456 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:30.456 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:30.456 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:30.456 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:30.456 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:30.456 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:30.456 
09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:30.456 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:30.456 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:30.456 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:30.456 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:30.456 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:30.456 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:30.456 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:30.456 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:30.456 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:30.456 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:30.456 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:30.456 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:30.456 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:30.456 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:30.456 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:30.456 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:30.456 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:30.456 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:30.456 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:30.456 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:30.456 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:30.456 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:30.456 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:30.456 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:30.456 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:30.456 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:30.456 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:30.457 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:30.457 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:30.457 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:30.457 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:30.457 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:30.457 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
continue 00:14:30.457 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:30.457 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:30.457 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:30.457 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:30.457 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:30.457 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:30.457 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:30.457 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:30.457 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:30.457 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:30.457 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:30.457 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:30.457 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:30.457 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:30.457 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:30.457 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:30.457 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:30.457 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:30.457 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:30.457 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:30.457 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:30.457 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:30.457 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:30.457 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:30.457 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:30.457 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:30.457 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:30.457 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:30.457 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:30.457 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:30.457 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:30.457 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:30.457 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:30.457 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:30.457 
09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:30.457 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:30.457 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:30.457 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:30.457 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:30.457 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:30.457 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:30.457 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:30.457 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:30.457 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:30.457 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:30.457 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:30.457 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:30.457 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:30.457 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:30.457 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:30.457 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:30.457 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:30.457 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:30.457 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:30.457 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:30.457 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:30.457 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:30.457 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:30.457 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:30.457 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:30.457 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:30.457 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:30.457 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:30.457 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:30.457 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:30.457 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:30.457 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:30.457 09:55:43 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:14:30.457 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:30.457 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:30.457 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:30.457 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:30.457 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:30.457 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:30.457 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:30.457 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:30.457 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:30.457 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:30.457 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:30.457 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:30.457 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:30.457 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:30.457 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:30.457 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:30.457 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:30.457 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:30.457 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:30.457 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:30.457 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:30.458 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:30.458 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:30.458 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:14:30.458 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:14:30.458 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:14:30.458 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:14:30.458 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:14:30.458 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:14:30.458 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:14:30.458 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:14:30.458 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:14:30.458 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:14:30.458 09:55:43 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@25 -- # [[ -n '' ]] 00:14:30.458 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:14:30.458 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:14:30.458 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:30.458 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:30.458 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 7990360 kB' 'MemAvailable: 9514904 kB' 'Buffers: 2436 kB' 'Cached: 1736368 kB' 'SwapCached: 0 kB' 'Active: 489336 kB' 'Inactive: 1366156 kB' 'Active(anon): 127152 kB' 'Inactive(anon): 0 kB' 'Active(file): 362184 kB' 'Inactive(file): 1366156 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 284 kB' 'Writeback: 0 kB' 'AnonPages: 118288 kB' 'Mapped: 47856 kB' 'Shmem: 10464 kB' 'KReclaimable: 66736 kB' 'Slab: 142316 kB' 'SReclaimable: 66736 kB' 'SUnreclaim: 75580 kB' 'KernelStack: 6272 kB' 'PageTables: 3812 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 336144 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54888 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 4018176 kB' 'DirectMap1G: 10485760 kB' 00:14:30.458 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:30.458 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:30.458 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:30.458 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:30.458 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:30.458 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:30.458 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:30.458 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:30.458 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:30.458 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:30.458 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:30.458 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:30.458 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:30.458 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:30.458 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:30.458 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:30.458 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 
-- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:30.458 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:30.458 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:30.458 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:30.458 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:30.458 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:30.458 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:30.458 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:30.458 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:30.458 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:30.458 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:30.458 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:30.458 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:30.458 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:30.458 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:30.458 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:30.458 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:30.458 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:30.458 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:30.458 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:30.458 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:30.458 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:30.458 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:30.458 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:30.458 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:30.458 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:30.458 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:30.458 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:30.458 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:30.458 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:30.458 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:30.458 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:30.458 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:30.458 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:30.458 09:55:43 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:30.458 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:30.458 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:30.458 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:30.458 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:30.458 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:30.458 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:30.458 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:30.458 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:30.458 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:30.458 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:30.458 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:30.458 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:30.458 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:30.458 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:30.458 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:30.459 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:30.459 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:30.459 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:30.459 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:30.459 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:30.459 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:30.459 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:30.459 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:30.459 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:30.459 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:30.459 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:30.459 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:30.459 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:30.459 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:30.459 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:30.459 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:30.459 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:30.459 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:30.459 09:55:43 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:30.459 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:30.459 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:30.459 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:30.459 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:30.459 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:30.459 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:30.459 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:30.459 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:30.459 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:30.459 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:30.459 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:30.459 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:30.459 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:30.459 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:30.459 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:30.459 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:30.459 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:30.459 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:30.459 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:30.459 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:30.459 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:30.459 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:30.459 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:30.459 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:30.459 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:30.459 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:30.459 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:30.459 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:30.459 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:30.459 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:30.459 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:30.459 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:30.459 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- 
# continue 00:14:30.459 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:30.459 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:30.459 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:30.459 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:30.459 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:30.459 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:30.459 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:30.459 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:30.459 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:30.459 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:30.459 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:30.459 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:30.459 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:30.459 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:30.459 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:30.459 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:30.459 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:30.459 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:30.459 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:30.459 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:30.459 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:30.459 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:30.459 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:30.459 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:30.459 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:30.459 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:30.459 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:30.459 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:30.459 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:30.459 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:30.459 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:30.459 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:30.459 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:30.459 09:55:43 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:14:30.459 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:30.459 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:30.459 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:30.459 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:30.459 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:30.459 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:30.459 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:30.459 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:30.459 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:30.459 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:30.459 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:30.459 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:30.459 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:30.459 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:30.459 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:30.459 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:30.459 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:30.459 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:30.459 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:30.459 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:30.459 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:30.460 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:30.460 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:30.460 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:30.460 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:30.460 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:30.460 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:30.460 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:30.460 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:30.460 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:30.460 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:30.460 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:30.460 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:14:30.460 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:30.460 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:30.460 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:30.460 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:30.460 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:30.460 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:30.460 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:30.460 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:30.460 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:30.460 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:30.460 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:30.460 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:30.460 09:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:30.460 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:30.460 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:30.460 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:30.460 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:30.460 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:30.460 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:30.460 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:30.460 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:14:30.460 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:14:30.460 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:14:30.460 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:14:30.460 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:14:30.460 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:14:30.460 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:14:30.460 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:14:30.460 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:14:30.460 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:14:30.460 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:14:30.460 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:14:30.460 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:14:30.460 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:14:30.460 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:30.460 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 7991588 kB' 'MemAvailable: 9516132 kB' 'Buffers: 2436 kB' 'Cached: 1736368 kB' 'SwapCached: 0 kB' 'Active: 489336 kB' 'Inactive: 1366156 kB' 'Active(anon): 127152 kB' 'Inactive(anon): 0 kB' 'Active(file): 362184 kB' 'Inactive(file): 1366156 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 284 kB' 'Writeback: 0 kB' 'AnonPages: 118288 kB' 'Mapped: 47856 kB' 'Shmem: 10464 kB' 'KReclaimable: 66736 kB' 'Slab: 142316 kB' 'SReclaimable: 66736 kB' 'SUnreclaim: 75580 kB' 'KernelStack: 6272 kB' 'PageTables: 3812 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 336144 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54888 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 4018176 kB' 'DirectMap1G: 10485760 kB' 00:14:30.460 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:30.460 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:30.460 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:30.460 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:30.460 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:30.460 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:30.460 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:30.460 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:30.460 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:30.460 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:30.460 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:30.460 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:30.460 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:30.460 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:30.460 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:30.460 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:30.460 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:30.460 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:30.460 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:30.460 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:14:30.460 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:30.460 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:30.460 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:30.460 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:30.460 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:30.460 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:30.460 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:30.460 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:30.460 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:30.460 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:30.460 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:30.460 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:30.460 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:30.461 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:30.461 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:30.461 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:30.461 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:30.461 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:30.461 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:30.461 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:30.461 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:30.461 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:30.461 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:30.461 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:30.461 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:30.461 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:30.461 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:30.461 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:30.461 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:30.461 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:30.461 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:30.461 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:30.461 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:30.461 09:55:44 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:30.461 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:30.461 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:30.461 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:30.461 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:30.461 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:30.461 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:30.461 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:30.461 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:30.461 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:30.461 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:30.461 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:30.461 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:30.461 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:30.461 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:30.461 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:30.461 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:30.461 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:30.461 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:30.461 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:30.461 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:30.461 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:30.461 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:30.461 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:30.461 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:30.461 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:30.461 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:30.461 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:30.461 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:30.461 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:30.461 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:30.461 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:30.461 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:30.461 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:30.461 09:55:44 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:30.461 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:30.461 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:30.461 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:30.461 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:30.461 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:30.461 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:30.461 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:30.461 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:30.461 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:30.461 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:30.461 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:30.461 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:30.461 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:30.461 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:30.461 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:30.461 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:30.462 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:30.462 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:30.462 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:30.462 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:30.462 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:30.462 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:30.462 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:30.462 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:30.462 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:30.462 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:30.462 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:30.462 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:30.462 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:30.462 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:30.462 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:30.462 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:30.462 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:30.462 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:30.462 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:30.462 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:30.462 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:30.462 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:30.462 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:30.462 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:30.462 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:30.462 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:30.462 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:30.462 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:30.462 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:30.462 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:30.462 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:30.462 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:30.462 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:30.462 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:30.462 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:30.462 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:30.462 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:30.462 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:30.462 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:30.462 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:30.462 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:30.462 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:30.462 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:30.462 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:30.462 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:30.462 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:30.462 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:30.462 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:30.462 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:30.462 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:30.462 09:55:44 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:14:30.462 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:30.462 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:30.462 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:30.462 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:30.462 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:30.462 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:30.462 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:30.462 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:30.462 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:30.462 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:30.462 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:30.462 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:30.462 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:30.462 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:30.462 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:30.462 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:30.462 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:30.462 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:30.462 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:30.462 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:30.462 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:30.462 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:30.462 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:30.462 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:30.462 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:30.462 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:30.462 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:30.462 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:30.462 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:30.462 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:30.462 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:30.462 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:30.462 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:30.462 09:55:44 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:30.462 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:30.462 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:30.462 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:30.462 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:30.462 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:30.462 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:30.462 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:30.463 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:30.463 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:30.463 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:30.463 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:30.463 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:30.463 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:14:30.463 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:14:30.463 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:14:30.463 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:14:30.463 nr_hugepages=1024 00:14:30.463 resv_hugepages=0 00:14:30.463 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:14:30.463 surplus_hugepages=0 00:14:30.463 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:14:30.463 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:14:30.463 anon_hugepages=0 00:14:30.463 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:14:30.463 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:14:30.463 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:14:30.463 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:14:30.463 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:14:30.463 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:14:30.463 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:14:30.463 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:14:30.463 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:14:30.463 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:14:30.463 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:14:30.463 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 
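[editor's note] The long runs of "[[ <key> == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] ... continue" entries above are the xtrace of setup/common.sh's get_meminfo helper scanning /proc/meminfo one key at a time; the backslash-escaped right-hand sides are simply how bash's xtrace prints an unquoted comparison operand, not corruption in the log. A minimal sketch of the loop being traced here, under the assumption of a simplified streaming form (the real setup/common.sh helper uses mapfile into a mem array, as the @28/@29 entries just above show); get_meminfo_sketch is a hypothetical name used only for illustration:

    # get_meminfo_sketch <MeminfoKey> [numa_node]
    # Pick the per-node meminfo file when a node is given, strip any
    # "Node <n>" prefix, then scan key by key and print the value of the
    # requested key. Every non-matching key appears in the trace as one
    # "[[ ... == ... ]]" test followed by "continue".
    get_meminfo_sketch() {
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo var val _
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # skip keys we were not asked for
            echo "$val"                        # e.g. "0" for HugePages_Surp here
            return 0
        done < <(sed 's/^Node [0-9]* //' "$mem_f")
        return 1
    }
    # Example: get_meminfo_sketch HugePages_Surp   -> prints 0 on this host
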
00:14:30.463 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:30.463 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 7991620 kB' 'MemAvailable: 9516164 kB' 'Buffers: 2436 kB' 'Cached: 1736368 kB' 'SwapCached: 0 kB' 'Active: 489476 kB' 'Inactive: 1366156 kB' 'Active(anon): 127292 kB' 'Inactive(anon): 0 kB' 'Active(file): 362184 kB' 'Inactive(file): 1366156 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 284 kB' 'Writeback: 0 kB' 'AnonPages: 118444 kB' 'Mapped: 47856 kB' 'Shmem: 10464 kB' 'KReclaimable: 66736 kB' 'Slab: 142312 kB' 'SReclaimable: 66736 kB' 'SUnreclaim: 75576 kB' 'KernelStack: 6240 kB' 'PageTables: 3716 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 336144 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54888 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 4018176 kB' 'DirectMap1G: 10485760 kB' 00:14:30.463 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:30.463 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:30.463 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:30.463 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:30.463 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:30.463 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:30.463 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:30.733 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:30.733 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:30.733 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:30.733 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:30.733 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:30.733 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:30.733 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:30.733 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:30.733 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:30.733 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:30.733 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:30.733 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:30.733 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
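[editor's note] The bookkeeping traced a little earlier (setup/hugepages.sh@97 through @109) lines up with the meminfo snapshot printed just above: HugePages_Total and HugePages_Free are both 1024, and AnonHugePages, HugePages_Surp and HugePages_Rsvd are all 0, so the 1024-page pool the test configured is fully accounted for before the no-shrink allocation proceeds. Restated as a short sketch with the values xtrace already expanded (which variable produced the literal 1024 on the left-hand side is not visible in the log; the authoritative logic lives in setup/hugepages.sh):

    anon=0             # AnonHugePages, returned by get_meminfo at hugepages.sh@97
    surp=0             # HugePages_Surp, @99
    resv=0             # HugePages_Rsvd, @100
    nr_hugepages=1024  # echoed at @102
    (( 1024 == nr_hugepages + surp + resv ))  # hugepages.sh@107: 1024 == 1024 + 0 + 0
    (( 1024 == nr_hugepages ))                # hugepages.sh@109: pool size matches exactly
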
00:14:30.733 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:30.733 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:30.733 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:30.733 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:30.733 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:30.733 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:30.733 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:30.733 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:30.733 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:30.733 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:30.733 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:30.733 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:30.733 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:30.733 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:30.733 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:30.733 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:30.733 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:30.733 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:30.733 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:30.733 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:30.733 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:30.733 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:30.733 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:30.733 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:30.733 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:30.733 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:30.733 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:30.733 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:30.733 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:30.733 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:30.733 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:30.733 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:30.733 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:30.733 09:55:44 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:30.733 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:30.733 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:30.733 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:30.733 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:30.733 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:30.733 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:30.733 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:30.733 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:30.733 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:30.733 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:30.733 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:30.733 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:30.733 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:30.733 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:30.733 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:30.733 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:30.733 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:30.733 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:30.733 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:30.733 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:30.733 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:30.733 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:30.733 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:30.733 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:30.733 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:30.733 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:30.733 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:30.733 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:30.733 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:30.733 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:30.733 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:30.733 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:30.733 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:30.733 09:55:44 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:30.733 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:30.733 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:30.733 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:30.733 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:30.733 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:30.733 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:30.733 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:30.733 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:30.733 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:30.733 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:30.733 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:30.733 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:30.733 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:30.733 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:30.733 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:30.733 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:30.733 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:30.733 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:30.733 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:30.733 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:30.733 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:30.733 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:30.733 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:30.733 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:30.733 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:30.733 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:30.734 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:30.734 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:30.734 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:30.734 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:30.734 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:30.734 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:30.734 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:14:30.734 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:30.734 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:30.734 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:30.734 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:30.734 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:30.734 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:30.734 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:30.734 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:30.734 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:30.734 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:30.734 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:30.734 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:30.734 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:30.734 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:30.734 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:30.734 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:30.734 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:30.734 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:30.734 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:30.734 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:30.734 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:30.734 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:30.734 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:30.734 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:30.734 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:30.734 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:30.734 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:30.734 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:30.734 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:30.734 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:30.734 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:30.734 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:30.734 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:30.734 09:55:44 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:30.734 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:30.734 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:30.734 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:30.734 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:30.734 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:30.734 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:30.734 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:30.734 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:30.734 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:30.734 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:30.734 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:30.734 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:30.734 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:30.734 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:30.734 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:30.734 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:30.734 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:30.734 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:30.734 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:30.734 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:30.734 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:30.734 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:30.734 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:30.734 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:30.734 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:30.734 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:30.734 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:30.734 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:30.734 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:30.734 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:30.734 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:30.734 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:30.734 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:14:30.734 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:30.734 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:30.734 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:30.734 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:30.734 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:30.734 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:30.734 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:14:30.734 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:14:30.734 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:14:30.734 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:14:30.734 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:14:30.734 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:14:30.734 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:14:30.734 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:14:30.734 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:14:30.734 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:14:30.734 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:14:30.734 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:14:30.734 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:14:30.734 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:14:30.734 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:14:30.734 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:14:30.734 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:14:30.734 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:14:30.734 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:14:30.734 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:14:30.734 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:14:30.734 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:30.734 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 7992980 kB' 'MemUsed: 4248988 kB' 'SwapCached: 0 kB' 'Active: 489236 kB' 'Inactive: 1366156 kB' 'Active(anon): 127052 kB' 'Inactive(anon): 0 kB' 'Active(file): 362184 kB' 'Inactive(file): 1366156 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 284 kB' 'Writeback: 0 kB' 'FilePages: 1738804 kB' 'Mapped: 47856 kB' 'AnonPages: 118416 kB' 
'Shmem: 10464 kB' 'KernelStack: 6240 kB' 'PageTables: 3716 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 66736 kB' 'Slab: 142308 kB' 'SReclaimable: 66736 kB' 'SUnreclaim: 75572 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:14:30.734 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:30.734 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:30.734 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:30.734 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:30.734 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:30.734 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:30.734 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:30.734 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:30.734 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:30.734 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:30.734 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:30.734 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:30.734 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:30.734 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:30.734 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:30.735 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:30.735 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:30.735 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:30.735 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:30.735 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:30.735 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:30.735 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:30.735 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:30.735 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:30.735 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:30.735 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:30.735 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:30.735 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:30.735 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:30.735 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:30.735 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:30.735 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:30.735 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:30.735 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:30.735 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:30.735 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:30.735 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:30.735 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:30.735 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:30.735 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:30.735 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:30.735 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:30.735 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:30.735 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:30.735 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:30.735 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:30.735 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:30.735 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:30.735 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:30.735 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:30.735 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:30.735 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:30.735 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:30.735 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:30.735 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:30.735 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:30.735 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:30.735 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:30.735 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:30.735 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:30.735 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:30.735 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:30.735 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:30.735 09:55:44 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:30.735 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:30.735 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:30.735 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:30.735 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:30.735 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:30.735 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:30.735 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:30.735 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:30.735 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:30.735 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:30.735 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:30.735 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:30.735 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:30.735 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:30.735 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:30.735 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:30.735 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:30.735 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:30.735 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:30.735 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:30.735 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:30.735 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:30.735 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:30.735 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:30.735 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:30.735 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:30.735 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:30.735 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:30.735 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:30.735 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:30.735 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:30.735 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:30.735 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:30.735 
09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:30.735 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:30.735 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:30.735 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:30.735 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:30.735 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:30.735 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:30.735 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:30.735 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:30.735 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:30.735 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:30.735 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:30.735 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:30.735 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:30.735 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:30.735 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:30.735 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:30.735 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:30.735 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:30.735 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:30.735 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:30.735 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:30.735 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:30.735 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:30.735 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:30.735 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:30.735 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:30.735 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:30.735 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:30.735 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:30.735 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:30.735 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:30.735 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:30.735 09:55:44 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:30.735 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:30.735 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:30.735 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:30.735 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:30.735 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:30.735 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:30.735 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:30.735 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:30.735 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:30.735 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:30.735 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:30.735 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:30.735 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:30.736 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:30.736 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:30.736 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:14:30.736 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:14:30.736 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:14:30.736 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:14:30.736 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:14:30.736 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:14:30.736 node0=1024 expecting 1024 00:14:30.736 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:14:30.736 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:14:30.736 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:14:30.736 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512 00:14:30.736 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output 00:14:30.736 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:14:30.736 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:14:30.995 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:14:31.259 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:14:31.259 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:14:31.259 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:14:31.259 09:55:44 
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:14:31.259 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:14:31.259 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:14:31.259 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:14:31.259 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:14:31.259 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:14:31.259 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:14:31.259 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:14:31.259 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:14:31.259 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:14:31.259 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:14:31.259 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:14:31.259 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:14:31.259 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:14:31.259 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:14:31.259 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:14:31.259 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:14:31.259 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:14:31.259 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:31.259 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:31.259 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 7995944 kB' 'MemAvailable: 9520488 kB' 'Buffers: 2436 kB' 'Cached: 1736368 kB' 'SwapCached: 0 kB' 'Active: 489900 kB' 'Inactive: 1366156 kB' 'Active(anon): 127716 kB' 'Inactive(anon): 0 kB' 'Active(file): 362184 kB' 'Inactive(file): 1366156 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 192 kB' 'Writeback: 0 kB' 'AnonPages: 118896 kB' 'Mapped: 47892 kB' 'Shmem: 10464 kB' 'KReclaimable: 66736 kB' 'Slab: 142272 kB' 'SReclaimable: 66736 kB' 'SUnreclaim: 75536 kB' 'KernelStack: 6260 kB' 'PageTables: 3800 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 336144 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54920 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 4018176 kB' 'DirectMap1G: 10485760 kB' 00:14:31.259 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
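A few records back (setup/hugepages.sh@112 through @130) the harness switched from the machine-wide view to a per-NUMA-node one: it globs /sys/devices/system/node/node<N>, reads each node's own meminfo, and prints "node0=1024 expecting 1024" once the per-node counts match the expectation, after which scripts/setup.sh reports "Requested 512 hugepages but 1024 already allocated on node0". Reconstructed as a standalone sketch of that per-node walk (the expected count is an assumed parameter and the loop body is illustrative rather than the repository's exact code):

#!/usr/bin/env bash
shopt -s extglob nullglob
expected=${1:-1024}   # assumed per-node expectation; the trace expects 1024
status=0
for node_dir in /sys/devices/system/node/node+([0-9]); do
    node=${node_dir##*node}
    # Per-node meminfo lines look like "Node 0 HugePages_Total: 1024".
    total=$(awk '/HugePages_Total:/ {print $NF}' "$node_dir/meminfo")
    echo "node${node}=${total} expecting ${expected}"
    [[ $total == "$expected" ]] || status=1
done
exit $status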
00:14:31.259 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:31.259 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:31.259 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:31.259 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:31.259 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:31.259 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:31.259 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:31.259 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:31.259 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:31.259 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:31.259 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:31.259 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:31.259 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:31.259 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:31.259 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:31.259 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:31.259 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:31.259 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:31.259 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:31.259 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:31.259 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:31.259 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:31.259 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:31.259 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:31.259 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:31.259 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:31.259 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:31.259 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:31.259 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:31.259 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:31.259 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:31.259 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:31.259 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:31.259 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:31.259 09:55:44 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:31.259 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:31.259 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:31.259 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:31.259 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:31.259 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:31.259 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:31.259 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:31.259 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:31.259 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:31.259 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:31.259 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:31.259 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:31.259 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:31.259 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:31.259 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:31.260 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:31.260 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:31.260 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:31.260 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:31.260 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:31.260 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:31.260 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:31.260 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:31.260 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:31.260 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:31.260 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:31.260 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:31.260 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:31.260 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:31.260 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:31.260 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:31.260 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:31.260 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:31.260 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:31.260 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:31.260 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:31.260 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:31.260 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:31.260 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:31.260 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:31.260 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:31.260 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:31.260 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:31.260 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:31.260 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:31.260 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:31.260 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:31.260 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:31.260 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:31.260 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:31.260 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:31.260 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:31.260 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:31.260 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:31.260 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:31.260 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:31.260 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:31.260 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:31.260 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:31.260 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:31.260 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:31.260 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:31.260 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:31.260 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:31.260 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:31.260 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:31.260 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
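The pattern test at setup/hugepages.sh@96, visible at the start of this second verify pass ("[[ always [madvise] never != *\[\n\e\v\e\r\]* ]]"), is asking whether transparent hugepages are enabled at all: only when the policy string does not have [never] selected does the harness go on to read AnonHugePages and fold it into the accounting, which is what the records around this point are doing. The same branch written out on its own (paths are the standard sysfs/procfs ones; variable names are illustrative):

#!/usr/bin/env bash
# Read the THP policy, e.g. "always [madvise] never"; the bracketed word is the active one.
thp_policy=$(</sys/kernel/mm/transparent_hugepage/enabled)
anon_kb=0
if [[ $thp_policy != *"[never]"* ]]; then
    # THP may hand out anonymous huge pages, so count them (kB in /proc/meminfo).
    anon_kb=$(awk '/^AnonHugePages:/ {print $2}' /proc/meminfo)
fi
echo "anon_hugepages=${anon_kb}"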
00:14:31.260 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:31.260 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:31.260 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:31.260 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:31.260 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:31.260 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:31.260 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:31.260 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:31.260 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:31.260 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:31.260 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:31.260 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:31.260 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:31.260 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:31.260 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:31.260 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:31.260 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:31.260 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:31.260 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:31.260 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:31.260 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:31.260 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:31.260 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:31.260 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:31.260 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:31.260 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:31.260 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:31.260 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:31.260 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:31.260 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:31.260 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:31.260 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:31.260 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:31.260 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:31.260 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:31.260 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:31.260 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:31.260 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:31.260 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:31.260 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:31.260 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:31.260 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:31.260 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:31.260 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:31.260 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:31.260 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:31.260 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:31.260 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:31.260 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:31.260 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:31.260 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:31.260 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:31.260 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:31.260 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:31.260 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:31.260 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:31.260 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:31.260 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:31.260 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:14:31.260 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:14:31.260 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:14:31.260 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:14:31.260 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:14:31.260 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:14:31.260 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:14:31.260 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:14:31.260 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:14:31.260 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # 
[[ -e /sys/devices/system/node/node/meminfo ]] 00:14:31.260 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:14:31.260 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:14:31.260 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:14:31.260 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:31.260 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:31.261 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 7995692 kB' 'MemAvailable: 9520236 kB' 'Buffers: 2436 kB' 'Cached: 1736368 kB' 'SwapCached: 0 kB' 'Active: 489652 kB' 'Inactive: 1366156 kB' 'Active(anon): 127468 kB' 'Inactive(anon): 0 kB' 'Active(file): 362184 kB' 'Inactive(file): 1366156 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 192 kB' 'Writeback: 0 kB' 'AnonPages: 118364 kB' 'Mapped: 47796 kB' 'Shmem: 10464 kB' 'KReclaimable: 66736 kB' 'Slab: 142272 kB' 'SReclaimable: 66736 kB' 'SUnreclaim: 75536 kB' 'KernelStack: 6272 kB' 'PageTables: 3772 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 336144 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54888 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 4018176 kB' 'DirectMap1G: 10485760 kB' 00:14:31.261 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:31.261 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:31.261 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:31.261 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:31.261 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:31.261 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:31.261 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:31.261 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:31.261 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:31.261 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:31.261 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:31.261 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:31.261 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:31.261 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:31.261 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:31.261 09:55:44 setup.sh.hugepages.no_shrink_alloc -- 
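The trace above shows the start of the HugePages_Surp lookup: get_meminfo defaults mem_f to /proc/meminfo, tests for a per-node /sys/devices/system/node/node<N>/meminfo file (with node= empty the literal path does not exist), loads the chosen file with `mapfile -t mem`, and strips any leading "Node <N> " prefix so per-node and global snapshots parse the same way. A minimal sketch of that source-selection step, reconstructed from the trace rather than taken from setup/common.sh, might look like:

```bash
#!/usr/bin/env bash
# Hedged reconstruction of the source-selection step traced above; variable
# names mirror the trace (node, mem_f, mem) but this is NOT the actual
# setup/common.sh implementation.
shopt -s extglob   # needed for the +([0-9]) pattern used below

dump_meminfo() {
  local node=$1            # optional NUMA node number; empty = whole system
  local mem_f=/proc/meminfo
  local -a mem

  # The trace checks for a per-node file first; with node="" the path
  # /sys/devices/system/node/node/meminfo does not exist, so the global
  # /proc/meminfo is kept.
  if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
    mem_f=/sys/devices/system/node/node$node/meminfo
  fi

  mapfile -t mem < "$mem_f"
  # Per-node lines carry a "Node <N> " prefix; strip it so both sources
  # look identical to the key-matching loop that follows.
  mem=("${mem[@]#Node +([0-9]) }")
  printf '%s\n' "${mem[@]}"
}
```

With node unset this reduces to reading /proc/meminfo directly, which is what the printf'd snapshot above reflects.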
setup/common.sh@31 -- # read -r var val _
[xtrace condensed: the get_meminfo scan steps through every remaining /proc/meminfo key (Cached through HugePages_Rsvd) with `IFS=': ' read -r var val _`, hitting `continue` on each non-matching key]
00:14:31.262 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:14:31.262 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:14:31.262 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:14:31.262 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0
00:14:31.262 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
[xtrace condensed: get_meminfo HugePages_Rsvd sets up exactly as before: local get=HugePages_Rsvd; local node=; local var val; local mem_f mem; mem_f=/proc/meminfo; the per-node /sys/devices/system/node/node/meminfo path is absent, so /proc/meminfo is read with mapfile -t mem]
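The loop condensed above is a plain key/value scan: each "Key: value" line is split with `IFS=': ' read -r var val _`, non-matching keys fall through to `continue`, and the first matching key's value is echoed (0 for HugePages_Surp here). The backslash-heavy `\H\u\g\e\P\a\g\e\s\_\S\u\r\p` in the xtrace output is just how `set -x` prints a literal comparison pattern. A hedged sketch of that matching loop (not the actual setup/common.sh code) is:

```bash
#!/usr/bin/env bash
# Hedged sketch of the key-matching loop condensed above; it shows the same
# "skip until the key matches" shape, not SPDK's real helper.
get_meminfo_value() {
  local get=$1             # e.g. HugePages_Surp
  local var val _
  while IFS=': ' read -r var val _; do
    # xtrace renders this literal comparison as
    # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
    [[ $var == "$get" ]] || continue
    echo "${val:-0}"
    return 0
  done < /proc/meminfo
  return 1
}

# e.g. surp=$(get_meminfo_value HugePages_Surp)   # -> 0 on this runner
```

Calling it for HugePages_Surp would yield 0 here, matching the surp=0 assignment in the trace; the HugePages_Rsvd query that resumes below does the same scan against a fresh snapshot.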
setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:14:31.262 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:31.262 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:31.262 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 7996112 kB' 'MemAvailable: 9520656 kB' 'Buffers: 2436 kB' 'Cached: 1736368 kB' 'SwapCached: 0 kB' 'Active: 489680 kB' 'Inactive: 1366156 kB' 'Active(anon): 127496 kB' 'Inactive(anon): 0 kB' 'Active(file): 362184 kB' 'Inactive(file): 1366156 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 192 kB' 'Writeback: 0 kB' 'AnonPages: 118384 kB' 'Mapped: 47796 kB' 'Shmem: 10464 kB' 'KReclaimable: 66736 kB' 'Slab: 142272 kB' 'SReclaimable: 66736 kB' 'SUnreclaim: 75536 kB' 'KernelStack: 6272 kB' 'PageTables: 3772 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 336144 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54888 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 4018176 kB' 'DirectMap1G: 10485760 kB' 00:14:31.262 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:31.262 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:31.262 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:31.262 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:31.262 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:31.262 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:31.262 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:31.262 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:31.262 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:31.263 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:31.263 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:31.263 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:31.263 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:31.263 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:31.263 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:31.263 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:31.263 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:31.263 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:31.263 09:55:44 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
[xtrace condensed: the same key-by-key scan repeats for the HugePages_Rsvd query, executing `continue` for every key from SwapCached through CmaFree] 00:14:31.264 09:55:44 setup.sh.hugepages.no_shrink_alloc --
setup/common.sh@31 -- # IFS=': ' 00:14:31.264 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:31.264 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:31.264 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:31.264 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:31.264 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:31.264 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:31.264 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:31.264 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:31.264 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:31.264 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:31.264 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:31.264 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:31.264 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:31.264 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:31.264 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:14:31.264 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:14:31.264 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:14:31.264 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:14:31.264 nr_hugepages=1024 00:14:31.264 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:14:31.264 resv_hugepages=0 00:14:31.264 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:14:31.264 surplus_hugepages=0 00:14:31.264 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:14:31.264 anon_hugepages=0 00:14:31.264 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:14:31.264 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:14:31.264 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:14:31.264 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:14:31.264 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:14:31.264 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:14:31.264 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:14:31.264 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:14:31.264 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:14:31.264 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:14:31.264 09:55:44 setup.sh.hugepages.no_shrink_alloc -- 
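At this point the trace has resolved anon=0, surp=0 and resv=0, printed nr_hugepages=1024, resv_hugepages=0, surplus_hugepages=0 and anon_hugepages=0, and run the arithmetic checks `(( 1024 == nr_hugepages + surp + resv ))` and `(( 1024 == nr_hugepages ))` before re-querying HugePages_Total. A hedged sketch of that no-shrink accounting check (the meminfo() helper and return-code handling are illustrative, not SPDK's code) is:

```bash
#!/usr/bin/env bash
# Hedged sketch of the hugepage accounting check traced above.
meminfo() { awk -v k="$1:" '$1 == k {print $2}' /proc/meminfo; }

no_shrink_check() {
  local nr_hugepages=1024                 # count the test configured earlier
  local anon surp resv total
  anon=$(meminfo AnonHugePages)           # 0 (kB) in the snapshots above
  surp=$(meminfo HugePages_Surp)          # 0
  resv=$(meminfo HugePages_Rsvd)          # 0
  total=$(meminfo HugePages_Total)        # 1024

  echo "nr_hugepages=$nr_hugepages"
  echo "resv_hugepages=$resv"
  echo "surplus_hugepages=$surp"
  echo "anon_hugepages=$anon"

  # The pre-allocated pool must neither shrink nor grow: the kernel's total
  # has to equal the configured count plus surplus and reserved pages.
  (( total == nr_hugepages + surp + resv )) || return 1
  (( total == nr_hugepages )) || return 1
}
```

On this runner every derived counter is 0 apart from the 1024 pre-allocated pages, so both checks pass and the script goes on to confirm HugePages_Total below.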
setup/common.sh@28 -- # mapfile -t mem 00:14:31.264 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:14:31.264 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:31.264 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:31.264 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 7996364 kB' 'MemAvailable: 9520908 kB' 'Buffers: 2436 kB' 'Cached: 1736368 kB' 'SwapCached: 0 kB' 'Active: 489368 kB' 'Inactive: 1366156 kB' 'Active(anon): 127184 kB' 'Inactive(anon): 0 kB' 'Active(file): 362184 kB' 'Inactive(file): 1366156 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 192 kB' 'Writeback: 0 kB' 'AnonPages: 118332 kB' 'Mapped: 47796 kB' 'Shmem: 10464 kB' 'KReclaimable: 66736 kB' 'Slab: 142272 kB' 'SReclaimable: 66736 kB' 'SUnreclaim: 75536 kB' 'KernelStack: 6256 kB' 'PageTables: 3724 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 336144 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54888 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 4018176 kB' 'DirectMap1G: 10485760 kB' 00:14:31.264 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:31.264 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:31.265 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:31.265 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:31.265 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:31.265 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:31.265 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:31.265 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:31.265 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:31.265 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:31.265 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:31.265 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:31.265 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:31.265 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:31.265 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:31.265 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:31.265 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:31.265 09:55:44 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
[xtrace condensed: the scan for HugePages_Total likewise skips SwapCached through SecPageTables with `continue`] 00:14:31.265 09:55:44
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:31.265 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:31.265 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:31.265 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:31.265 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:31.265 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:31.265 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:31.265 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:31.265 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:31.265 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:31.266 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:31.266 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:31.266 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:31.266 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:31.266 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:31.266 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:31.266 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:31.266 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:31.266 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:31.266 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:31.266 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:31.266 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:31.266 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:31.266 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:31.266 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:31.266 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:31.266 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:31.266 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:31.266 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:31.266 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:31.266 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:31.266 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:31.266 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:31.266 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:14:31.266 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:31.266 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:31.266 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:31.266 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:31.266 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:31.266 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:31.266 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:31.266 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:31.266 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:31.266 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:31.266 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:31.266 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:31.266 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:31.266 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:31.266 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:31.266 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:31.266 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:31.266 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:31.266 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:31.266 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:31.266 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:31.266 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:31.266 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:31.266 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:31.266 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:31.266 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:31.266 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:31.266 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:31.266 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:31.266 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:31.266 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:31.266 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:31.266 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
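When the same helper is pointed at a NUMA node (as in the HugePages_Surp lookup for node 0 later in this trace), it switches its input from /proc/meminfo to /sys/devices/system/node/node0/meminfo and drops the leading "Node N " prefix before parsing. A node-aware variant under those assumptions might look like the sketch below; the sed-based prefix strip is a simplification of the script's mapfile/extglob approach, and the function name is hypothetical.

#!/usr/bin/env bash
# Node-aware sketch: same key/value loop, but read the per-node meminfo file
# and drop the "Node N " prefix those files put in front of every line.
get_node_meminfo_sketch() {
    local get=$1 node=$2 var val _         # e.g. HugePages_Surp 0
    local mem_f=/proc/meminfo
    [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue
        echo "$val"
        return 0
    done < <(sed 's/^Node [0-9]* //' "$mem_f")
    return 1
}

get_node_meminfo_sketch HugePages_Surp 0   # prints 0 for node0 in this log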
00:14:31.266 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:31.266 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:31.266 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:31.266 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:31.266 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:31.266 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:31.266 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:31.266 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:31.266 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:14:31.266 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:14:31.266 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:14:31.266 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:14:31.266 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:14:31.266 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:14:31.266 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:14:31.266 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:14:31.266 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:14:31.266 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:14:31.266 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:14:31.266 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:14:31.266 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:14:31.266 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:14:31.266 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:14:31.266 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:14:31.266 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:14:31.266 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:14:31.266 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:14:31.266 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:14:31.266 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:14:31.266 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:31.266 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 7996364 kB' 'MemUsed: 4245604 kB' 'SwapCached: 0 kB' 'Active: 489400 kB' 'Inactive: 1366156 kB' 'Active(anon): 127216 kB' 'Inactive(anon): 0 kB' 'Active(file): 362184 
kB' 'Inactive(file): 1366156 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 192 kB' 'Writeback: 0 kB' 'FilePages: 1738804 kB' 'Mapped: 47796 kB' 'AnonPages: 118364 kB' 'Shmem: 10464 kB' 'KernelStack: 6272 kB' 'PageTables: 3772 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 66736 kB' 'Slab: 142272 kB' 'SReclaimable: 66736 kB' 'SUnreclaim: 75536 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:14:31.266 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:31.266 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:31.266 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:31.266 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:31.266 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:31.266 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:31.266 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:31.266 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:31.266 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:31.266 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:31.266 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:31.266 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:31.266 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:31.266 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:31.266 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:31.266 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:31.266 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:31.266 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:31.266 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:31.266 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:31.266 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:31.266 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:31.266 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:31.266 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:31.266 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:31.266 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:31.266 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:31.266 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:31.266 
09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:31.266 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:31.266 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:31.267 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:31.267 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:31.267 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:31.267 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:31.267 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:31.267 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:31.267 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:31.267 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:31.267 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:31.267 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:31.267 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:31.267 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:31.267 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:31.267 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:31.267 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:31.267 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:31.267 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:31.267 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:31.267 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:31.267 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:31.267 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:31.267 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:31.267 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:31.267 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:31.267 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:31.267 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:31.267 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:31.267 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:31.267 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:31.267 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:31.267 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:31.267 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:31.267 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:31.267 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:31.267 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:31.267 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:31.267 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:31.267 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:31.267 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:31.267 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:31.267 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:31.267 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:31.267 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:31.267 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:31.267 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:31.267 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:31.267 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:31.267 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:31.267 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:31.267 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:31.267 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:31.267 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:31.267 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:31.267 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:31.267 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:31.267 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:31.267 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:31.267 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:31.267 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:31.267 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:31.267 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:31.267 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:31.267 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:31.267 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:31.267 09:55:44 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:14:31.267 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:31.267 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:31.267 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:31.267 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:31.267 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:31.267 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:31.267 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:31.267 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:31.267 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:31.267 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:31.267 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:31.267 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:31.267 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:31.267 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:31.267 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:31.267 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:31.267 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:31.267 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:31.267 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:31.267 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:31.267 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:31.267 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:31.267 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:31.267 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:31.267 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:31.267 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:31.267 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:31.267 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:31.267 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:31.267 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:31.267 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:31.267 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:31.267 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:31.267 09:55:44 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:31.267 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:31.267 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:31.267 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:31.267 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:31.267 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:31.267 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:31.267 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:31.267 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:31.267 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:31.267 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:31.267 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:31.267 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:31.267 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:14:31.267 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:14:31.267 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:14:31.267 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:31.267 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:14:31.267 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:14:31.267 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:14:31.267 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:14:31.267 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:14:31.267 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:14:31.267 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:14:31.268 node0=1024 expecting 1024 00:14:31.268 09:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:14:31.268 00:14:31.268 real 0m1.388s 00:14:31.268 user 0m0.635s 00:14:31.268 sys 0m0.813s 00:14:31.268 09:55:44 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:31.268 09:55:44 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:14:31.268 ************************************ 00:14:31.268 END TEST no_shrink_alloc 00:14:31.268 ************************************ 00:14:31.526 09:55:44 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:14:31.526 09:55:44 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:14:31.526 09:55:44 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:14:31.526 09:55:44 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:14:31.526 
09:55:44 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:14:31.526 09:55:44 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:14:31.526 09:55:44 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:14:31.526 09:55:44 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:14:31.526 09:55:44 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:14:31.526 09:55:44 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:14:31.526 00:14:31.526 real 0m5.684s 00:14:31.526 user 0m2.534s 00:14:31.526 sys 0m3.365s 00:14:31.526 09:55:44 setup.sh.hugepages -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:31.526 09:55:44 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:14:31.526 ************************************ 00:14:31.526 END TEST hugepages 00:14:31.526 ************************************ 00:14:31.526 09:55:44 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:14:31.526 09:55:44 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:14:31.526 09:55:44 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:14:31.526 09:55:44 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:31.526 09:55:44 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:14:31.526 ************************************ 00:14:31.526 START TEST driver 00:14:31.526 ************************************ 00:14:31.526 09:55:44 setup.sh.driver -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:14:31.526 * Looking for test storage... 00:14:31.526 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:14:31.526 09:55:45 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:14:31.526 09:55:45 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:14:31.526 09:55:45 setup.sh.driver -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:14:32.462 09:55:45 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:14:32.462 09:55:45 setup.sh.driver -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:14:32.462 09:55:45 setup.sh.driver -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:32.462 09:55:45 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:14:32.462 ************************************ 00:14:32.462 START TEST guess_driver 00:14:32.462 ************************************ 00:14:32.462 09:55:45 setup.sh.driver.guess_driver -- common/autotest_common.sh@1123 -- # guess_driver 00:14:32.462 09:55:45 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:14:32.462 09:55:45 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:14:32.462 09:55:45 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:14:32.462 09:55:45 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:14:32.462 09:55:45 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:14:32.462 09:55:45 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:14:32.462 09:55:45 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:14:32.462 09:55:45 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 
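The guess_driver trace that begins above and continues below only selects vfio when IOMMU groups are present (or unsafe no-IOMMU mode reports Y); on this VM the group count is 0, so it falls back to uio_pci_generic and accepts it because modprobe --show-depends resolves the module to real .ko files. A condensed sketch of that decision follows, with an illustrative function name and an assumed vfio-pci label for the branch this run never takes.

#!/usr/bin/env bash
# Sketch of the driver-selection logic guess_driver exercises in this trace.
shopt -s nullglob                          # empty iommu_groups dir -> empty array
pick_driver_sketch() {
    local groups=(/sys/kernel/iommu_groups/*)
    local unsafe=''
    [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] &&
        unsafe=$(< /sys/module/vfio/parameters/enable_unsafe_noiommu_mode)
    if (( ${#groups[@]} > 0 )) || [[ $unsafe == Y ]]; then
        echo vfio-pci                      # assumed label; this run never takes the vfio branch
    elif modprobe --show-depends uio_pci_generic 2>/dev/null | grep -q '\.ko'; then
        echo uio_pci_generic               # usable: the module chain resolves to real .ko files
    else
        echo 'No valid driver found'
        return 1
    fi
}

pick_driver_sketch                         # prints uio_pci_generic on this VM (0 IOMMU groups)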
00:14:32.462 09:55:45 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 0 > 0 )) 00:14:32.462 09:55:45 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # [[ '' == Y ]] 00:14:32.462 09:55:45 setup.sh.driver.guess_driver -- setup/driver.sh@32 -- # return 1 00:14:32.462 09:55:45 setup.sh.driver.guess_driver -- setup/driver.sh@38 -- # uio 00:14:32.462 09:55:45 setup.sh.driver.guess_driver -- setup/driver.sh@17 -- # is_driver uio_pci_generic 00:14:32.462 09:55:45 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod uio_pci_generic 00:14:32.462 09:55:45 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep uio_pci_generic 00:14:32.462 09:55:45 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends uio_pci_generic 00:14:32.462 09:55:45 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio.ko.xz 00:14:32.462 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio_pci_generic.ko.xz == *\.\k\o* ]] 00:14:32.462 09:55:45 setup.sh.driver.guess_driver -- setup/driver.sh@39 -- # echo uio_pci_generic 00:14:32.462 09:55:45 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=uio_pci_generic 00:14:32.462 09:55:45 setup.sh.driver.guess_driver -- setup/driver.sh@51 -- # [[ uio_pci_generic == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:14:32.462 09:55:45 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=uio_pci_generic' 00:14:32.462 Looking for driver=uio_pci_generic 00:14:32.462 09:55:45 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:14:32.462 09:55:45 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:14:32.462 09:55:45 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:14:32.462 09:55:45 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:14:33.028 09:55:46 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ devices: == \-\> ]] 00:14:33.029 09:55:46 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # continue 00:14:33.029 09:55:46 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:14:33.288 09:55:46 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:14:33.288 09:55:46 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:14:33.288 09:55:46 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:14:33.288 09:55:46 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:14:33.288 09:55:46 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:14:33.288 09:55:46 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:14:33.288 09:55:46 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:14:33.288 09:55:46 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:14:33.288 09:55:46 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:14:33.288 09:55:46 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:14:34.223 00:14:34.223 real 0m1.691s 00:14:34.223 user 0m0.610s 00:14:34.223 sys 0m1.131s 00:14:34.223 ************************************ 00:14:34.223 END TEST guess_driver 00:14:34.223 
************************************ 00:14:34.223 09:55:47 setup.sh.driver.guess_driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:34.223 09:55:47 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:14:34.224 09:55:47 setup.sh.driver -- common/autotest_common.sh@1142 -- # return 0 00:14:34.224 ************************************ 00:14:34.224 END TEST driver 00:14:34.224 ************************************ 00:14:34.224 00:14:34.224 real 0m2.595s 00:14:34.224 user 0m0.915s 00:14:34.224 sys 0m1.822s 00:14:34.224 09:55:47 setup.sh.driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:34.224 09:55:47 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:14:34.224 09:55:47 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:14:34.224 09:55:47 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:14:34.224 09:55:47 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:14:34.224 09:55:47 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:34.224 09:55:47 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:14:34.224 ************************************ 00:14:34.224 START TEST devices 00:14:34.224 ************************************ 00:14:34.224 09:55:47 setup.sh.devices -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:14:34.224 * Looking for test storage... 00:14:34.224 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:14:34.224 09:55:47 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:14:34.224 09:55:47 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:14:34.224 09:55:47 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:14:34.224 09:55:47 setup.sh.devices -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:14:35.161 09:55:48 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:14:35.161 09:55:48 setup.sh.devices -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:14:35.161 09:55:48 setup.sh.devices -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:14:35.161 09:55:48 setup.sh.devices -- common/autotest_common.sh@1670 -- # local nvme bdf 00:14:35.161 09:55:48 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:14:35.161 09:55:48 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:14:35.161 09:55:48 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:14:35.161 09:55:48 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:14:35.161 09:55:48 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:14:35.161 09:55:48 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:14:35.161 09:55:48 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n2 00:14:35.161 09:55:48 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n2 00:14:35.161 09:55:48 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:14:35.161 09:55:48 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:14:35.161 09:55:48 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:14:35.161 09:55:48 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n3 
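The devices test starting above first filters out zoned namespaces: for every /sys/block/nvme* entry it reads queue/zoned and treats the device as zoned only when the attribute exists and reports something other than "none" (every device in this run reports none). A standalone sketch of that check, with an illustrative function name:

#!/usr/bin/env bash
# Sketch of the zoned-namespace filter: a block device counts as zoned only
# when queue/zoned exists and reports something other than "none".
is_block_zoned_sketch() {
    local device=$1                        # e.g. nvme0n1
    [[ -e /sys/block/$device/queue/zoned ]] || return 1
    [[ $(< "/sys/block/$device/queue/zoned") != none ]]
}

for dev in /sys/block/nvme*; do
    is_block_zoned_sketch "${dev##*/}" && echo "zoned: ${dev##*/}"
done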
00:14:35.161 09:55:48 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n3 00:14:35.161 09:55:48 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:14:35.161 09:55:48 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:14:35.161 09:55:48 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:14:35.161 09:55:48 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:14:35.161 09:55:48 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:14:35.161 09:55:48 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:14:35.161 09:55:48 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:14:35.161 09:55:48 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:14:35.161 09:55:48 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:14:35.161 09:55:48 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:14:35.161 09:55:48 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:14:35.161 09:55:48 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:14:35.161 09:55:48 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:14:35.161 09:55:48 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:14:35.161 09:55:48 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:14:35.161 09:55:48 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:14:35.161 09:55:48 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:14:35.161 09:55:48 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:14:35.161 09:55:48 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:14:35.161 09:55:48 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:14:35.161 No valid GPT data, bailing 00:14:35.161 09:55:48 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:14:35.161 09:55:48 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:14:35.161 09:55:48 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:14:35.161 09:55:48 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:14:35.161 09:55:48 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:14:35.161 09:55:48 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:14:35.161 09:55:48 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:14:35.161 09:55:48 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:14:35.161 09:55:48 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:14:35.161 09:55:48 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:14:35.161 09:55:48 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:14:35.161 09:55:48 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n2 00:14:35.161 09:55:48 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:14:35.161 09:55:48 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:14:35.161 09:55:48 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:14:35.161 09:55:48 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n2 00:14:35.161 
09:55:48 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:14:35.161 09:55:48 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:14:35.161 No valid GPT data, bailing 00:14:35.161 09:55:48 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:14:35.421 09:55:48 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:14:35.421 09:55:48 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:14:35.421 09:55:48 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n2 00:14:35.421 09:55:48 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n2 00:14:35.421 09:55:48 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n2 ]] 00:14:35.421 09:55:48 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:14:35.421 09:55:48 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:14:35.421 09:55:48 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:14:35.421 09:55:48 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:14:35.421 09:55:48 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:14:35.421 09:55:48 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n3 00:14:35.421 09:55:48 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:14:35.421 09:55:48 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:14:35.421 09:55:48 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:14:35.421 09:55:48 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n3 00:14:35.421 09:55:48 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:14:35.421 09:55:48 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:14:35.421 No valid GPT data, bailing 00:14:35.421 09:55:48 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:14:35.421 09:55:48 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:14:35.421 09:55:48 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:14:35.421 09:55:48 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n3 00:14:35.421 09:55:48 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n3 00:14:35.421 09:55:48 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n3 ]] 00:14:35.421 09:55:48 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:14:35.421 09:55:48 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:14:35.421 09:55:48 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:14:35.421 09:55:48 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:14:35.421 09:55:48 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:14:35.421 09:55:48 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme1n1 00:14:35.421 09:55:48 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme1 00:14:35.421 09:55:48 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:10.0 00:14:35.421 09:55:48 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:14:35.421 09:55:48 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme1n1 00:14:35.421 09:55:48 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:14:35.421 09:55:48 
setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:14:35.421 No valid GPT data, bailing 00:14:35.421 09:55:48 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:14:35.421 09:55:48 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:14:35.421 09:55:48 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:14:35.421 09:55:48 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n1 00:14:35.421 09:55:48 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme1n1 00:14:35.421 09:55:48 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n1 ]] 00:14:35.421 09:55:48 setup.sh.devices -- setup/common.sh@80 -- # echo 5368709120 00:14:35.421 09:55:48 setup.sh.devices -- setup/devices.sh@204 -- # (( 5368709120 >= min_disk_size )) 00:14:35.421 09:55:48 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:14:35.421 09:55:48 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:10.0 00:14:35.421 09:55:48 setup.sh.devices -- setup/devices.sh@209 -- # (( 4 > 0 )) 00:14:35.421 09:55:48 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:14:35.421 09:55:48 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:14:35.421 09:55:48 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:14:35.421 09:55:48 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:35.421 09:55:48 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:14:35.421 ************************************ 00:14:35.421 START TEST nvme_mount 00:14:35.421 ************************************ 00:14:35.421 09:55:48 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1123 -- # nvme_mount 00:14:35.421 09:55:48 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:14:35.421 09:55:48 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:14:35.421 09:55:48 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:14:35.421 09:55:48 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:14:35.421 09:55:48 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:14:35.421 09:55:48 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:14:35.421 09:55:48 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:14:35.421 09:55:48 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:14:35.421 09:55:48 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:14:35.421 09:55:48 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:14:35.421 09:55:48 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:14:35.421 09:55:48 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:14:35.421 09:55:48 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:14:35.421 09:55:48 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:14:35.421 09:55:48 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:14:35.421 09:55:48 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:14:35.421 09:55:48 setup.sh.devices.nvme_mount -- 
setup/common.sh@51 -- # (( size /= 4096 )) 00:14:35.421 09:55:48 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:14:35.421 09:55:48 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:14:36.799 Creating new GPT entries in memory. 00:14:36.799 GPT data structures destroyed! You may now partition the disk using fdisk or 00:14:36.799 other utilities. 00:14:36.799 09:55:49 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:14:36.799 09:55:49 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:14:36.799 09:55:49 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:14:36.799 09:55:49 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:14:36.799 09:55:49 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:14:37.737 Creating new GPT entries in memory. 00:14:37.738 The operation has completed successfully. 00:14:37.738 09:55:51 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:14:37.738 09:55:51 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:14:37.738 09:55:51 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 58953 00:14:37.738 09:55:51 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:14:37.738 09:55:51 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size= 00:14:37.738 09:55:51 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:14:37.738 09:55:51 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:14:37.738 09:55:51 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:14:37.738 09:55:51 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:14:37.738 09:55:51 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:00:11.0 nvme0n1:nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:14:37.738 09:55:51 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:14:37.738 09:55:51 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:14:37.738 09:55:51 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:14:37.738 09:55:51 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:14:37.738 09:55:51 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:14:37.738 09:55:51 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:14:37.738 09:55:51 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:14:37.738 09:55:51 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:14:37.738 09:55:51 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:14:37.738 09:55:51 setup.sh.devices.nvme_mount -- 
setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:14:37.738 09:55:51 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:14:37.738 09:55:51 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:14:37.738 09:55:51 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:14:37.997 09:55:51 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:14:37.997 09:55:51 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:14:37.997 09:55:51 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:14:37.997 09:55:51 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:14:37.997 09:55:51 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:14:37.997 09:55:51 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:14:37.997 09:55:51 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:14:37.997 09:55:51 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:14:38.255 09:55:51 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:14:38.255 09:55:51 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:14:38.255 09:55:51 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:14:38.255 09:55:51 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:14:38.255 09:55:51 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:14:38.255 09:55:51 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:14:38.255 09:55:51 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:14:38.255 09:55:51 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:14:38.256 09:55:51 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:14:38.256 09:55:51 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:14:38.256 09:55:51 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:14:38.256 09:55:51 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:14:38.256 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:14:38.256 09:55:51 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:14:38.256 09:55:51 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:14:38.515 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:14:38.515 /dev/nvme0n1: 8 bytes were erased at offset 0xfffff000 (gpt): 45 46 49 20 50 41 52 54 00:14:38.515 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:14:38.515 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:14:38.515 09:55:51 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- 
# mkfs /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 1024M 00:14:38.515 09:55:51 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size=1024M 00:14:38.515 09:55:51 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:14:38.515 09:55:51 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:14:38.515 09:55:51 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:14:38.515 09:55:51 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:14:38.515 09:55:52 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:00:11.0 nvme0n1:nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:14:38.515 09:55:52 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:14:38.515 09:55:52 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:14:38.515 09:55:52 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:14:38.515 09:55:52 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:14:38.515 09:55:52 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:14:38.515 09:55:52 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:14:38.515 09:55:52 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:14:38.515 09:55:52 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:14:38.515 09:55:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:14:38.515 09:55:52 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:14:38.515 09:55:52 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:14:38.515 09:55:52 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:14:38.515 09:55:52 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:14:38.774 09:55:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:14:38.774 09:55:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:14:38.774 09:55:52 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:14:38.774 09:55:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:14:38.774 09:55:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:14:38.774 09:55:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:14:39.034 09:55:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:14:39.034 09:55:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:14:39.034 09:55:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:14:39.034 09:55:52 
setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:14:39.293 09:55:52 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:14:39.293 09:55:52 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:14:39.294 09:55:52 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:14:39.294 09:55:52 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:14:39.294 09:55:52 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:14:39.294 09:55:52 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:14:39.294 09:55:52 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:00:11.0 data@nvme0n1 '' '' 00:14:39.294 09:55:52 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:14:39.294 09:55:52 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:14:39.294 09:55:52 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:14:39.294 09:55:52 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:14:39.294 09:55:52 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:14:39.294 09:55:52 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:14:39.294 09:55:52 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:14:39.294 09:55:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:14:39.294 09:55:52 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:14:39.294 09:55:52 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:14:39.294 09:55:52 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:14:39.294 09:55:52 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:14:39.553 09:55:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:14:39.553 09:55:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:14:39.553 09:55:53 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:14:39.553 09:55:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:14:39.553 09:55:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:14:39.553 09:55:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:14:39.813 09:55:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:14:39.813 09:55:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:14:39.813 09:55:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:14:39.813 09:55:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:14:39.813 09:55:53 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:14:39.813 09:55:53 setup.sh.devices.nvme_mount -- 
setup/devices.sh@68 -- # [[ -n '' ]] 00:14:39.813 09:55:53 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:14:39.813 09:55:53 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:14:39.813 09:55:53 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:14:39.813 09:55:53 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:14:39.813 09:55:53 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:14:39.813 09:55:53 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:14:39.813 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:14:39.813 00:14:39.813 real 0m4.454s 00:14:39.813 user 0m0.831s 00:14:39.813 sys 0m1.373s 00:14:39.813 09:55:53 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:39.813 09:55:53 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:14:39.813 ************************************ 00:14:39.813 END TEST nvme_mount 00:14:39.813 ************************************ 00:14:40.072 09:55:53 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 0 00:14:40.072 09:55:53 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:14:40.072 09:55:53 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:14:40.072 09:55:53 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:40.072 09:55:53 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:14:40.072 ************************************ 00:14:40.072 START TEST dm_mount 00:14:40.072 ************************************ 00:14:40.072 09:55:53 setup.sh.devices.dm_mount -- common/autotest_common.sh@1123 -- # dm_mount 00:14:40.072 09:55:53 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:14:40.072 09:55:53 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:14:40.072 09:55:53 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:14:40.072 09:55:53 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:14:40.072 09:55:53 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:14:40.072 09:55:53 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:14:40.072 09:55:53 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:14:40.072 09:55:53 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:14:40.072 09:55:53 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:14:40.072 09:55:53 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:14:40.072 09:55:53 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:14:40.072 09:55:53 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:14:40.072 09:55:53 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:14:40.072 09:55:53 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:14:40.072 09:55:53 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:14:40.072 09:55:53 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:14:40.072 09:55:53 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:14:40.072 09:55:53 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 
00:14:40.072 09:55:53 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 4096 )) 00:14:40.072 09:55:53 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:14:40.072 09:55:53 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:14:41.009 Creating new GPT entries in memory. 00:14:41.009 GPT data structures destroyed! You may now partition the disk using fdisk or 00:14:41.009 other utilities. 00:14:41.009 09:55:54 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:14:41.009 09:55:54 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:14:41.009 09:55:54 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:14:41.009 09:55:54 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:14:41.009 09:55:54 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:14:41.945 Creating new GPT entries in memory. 00:14:41.945 The operation has completed successfully. 00:14:41.945 09:55:55 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:14:41.945 09:55:55 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:14:41.945 09:55:55 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:14:41.945 09:55:55 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:14:41.945 09:55:55 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:264192:526335 00:14:43.321 The operation has completed successfully. 
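The sgdisk calls traced above are what carve the two test partitions out of /dev/nvme0n1: the old GPT is zapped first, then partition_drive creates two equal partitions at the sector ranges it computed. A minimal stand-alone sketch of that sequence, using the same device and sector ranges shown in the log (destructive; the harness additionally serializes access with flock and waits for udev through its own sync_dev_uevents.sh helper rather than partprobe):

  disk=/dev/nvme0n1
  sgdisk "$disk" --zap-all              # destroy any existing GPT/MBR structures
  sgdisk "$disk" --new=1:2048:264191    # first test partition (range taken from the trace)
  sgdisk "$disk" --new=2:264192:526335  # second test partition
  partprobe "$disk"                     # ask the kernel to re-read the partition table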
00:14:43.321 09:55:56 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:14:43.321 09:55:56 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:14:43.321 09:55:56 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 59386 00:14:43.321 09:55:56 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:14:43.321 09:55:56 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:14:43.321 09:55:56 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:14:43.321 09:55:56 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:14:43.321 09:55:56 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:14:43.321 09:55:56 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:14:43.321 09:55:56 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:14:43.321 09:55:56 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:14:43.321 09:55:56 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:14:43.321 09:55:56 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:14:43.321 09:55:56 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:14:43.321 09:55:56 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:14:43.321 09:55:56 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:14:43.322 09:55:56 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:14:43.322 09:55:56 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount size= 00:14:43.322 09:55:56 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:14:43.322 09:55:56 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:14:43.322 09:55:56 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:14:43.322 09:55:56 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:14:43.322 09:55:56 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:00:11.0 nvme0n1:nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:14:43.322 09:55:56 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:14:43.322 09:55:56 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:14:43.322 09:55:56 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:14:43.322 09:55:56 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:14:43.322 09:55:56 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:14:43.322 09:55:56 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:14:43.322 09:55:56 setup.sh.devices.dm_mount -- 
setup/devices.sh@56 -- # : 00:14:43.322 09:55:56 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:14:43.322 09:55:56 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:14:43.322 09:55:56 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:14:43.322 09:55:56 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:14:43.322 09:55:56 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:14:43.322 09:55:56 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:14:43.581 09:55:56 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:14:43.581 09:55:56 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:14:43.581 09:55:56 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:14:43.581 09:55:56 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:14:43.581 09:55:56 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:14:43.581 09:55:56 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:14:43.581 09:55:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:14:43.581 09:55:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:14:43.840 09:55:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:14:43.840 09:55:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:14:43.840 09:55:57 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:14:43.840 09:55:57 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount ]] 00:14:43.840 09:55:57 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:14:43.840 09:55:57 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:14:43.840 09:55:57 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:14:43.840 09:55:57 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:14:43.840 09:55:57 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:00:11.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:14:43.840 09:55:57 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:14:43.840 09:55:57 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:14:43.840 09:55:57 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:14:43.840 09:55:57 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:14:43.840 09:55:57 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:14:43.840 09:55:57 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:14:43.840 09:55:57 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:14:43.840 09:55:57 setup.sh.devices.dm_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:14:43.840 09:55:57 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:14:43.840 09:55:57 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:14:43.840 09:55:57 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:14:43.840 09:55:57 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:14:44.100 09:55:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:14:44.100 09:55:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:14:44.100 09:55:57 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:14:44.100 09:55:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:14:44.100 09:55:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:14:44.100 09:55:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:14:44.359 09:55:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:14:44.359 09:55:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:14:44.359 09:55:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:14:44.359 09:55:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:14:44.619 09:55:57 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:14:44.619 09:55:57 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:14:44.619 09:55:57 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:14:44.619 09:55:57 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:14:44.619 09:55:57 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:14:44.619 09:55:57 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:14:44.619 09:55:57 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:14:44.619 09:55:58 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:14:44.619 09:55:58 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:14:44.619 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:14:44.619 09:55:58 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:14:44.619 09:55:58 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:14:44.619 00:14:44.619 real 0m4.585s 00:14:44.619 user 0m0.568s 00:14:44.619 sys 0m0.980s 00:14:44.619 09:55:58 setup.sh.devices.dm_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:44.619 09:55:58 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:14:44.619 ************************************ 00:14:44.619 END TEST dm_mount 00:14:44.619 ************************************ 00:14:44.619 09:55:58 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 0 00:14:44.619 09:55:58 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:14:44.619 09:55:58 
setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:14:44.619 09:55:58 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:14:44.619 09:55:58 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:14:44.619 09:55:58 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:14:44.619 09:55:58 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:14:44.619 09:55:58 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:14:44.878 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:14:44.878 /dev/nvme0n1: 8 bytes were erased at offset 0xfffff000 (gpt): 45 46 49 20 50 41 52 54 00:14:44.878 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:14:44.878 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:14:44.878 09:55:58 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:14:44.878 09:55:58 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:14:44.879 09:55:58 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:14:44.879 09:55:58 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:14:44.879 09:55:58 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:14:44.879 09:55:58 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:14:44.879 09:55:58 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:14:44.879 ************************************ 00:14:44.879 END TEST devices 00:14:44.879 ************************************ 00:14:44.879 00:14:44.879 real 0m10.803s 00:14:44.879 user 0m2.117s 00:14:44.879 sys 0m3.140s 00:14:44.879 09:55:58 setup.sh.devices -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:44.879 09:55:58 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:14:44.879 09:55:58 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:14:44.879 00:14:44.879 real 0m25.047s 00:14:44.879 user 0m7.922s 00:14:44.879 sys 0m11.955s 00:14:44.879 09:55:58 setup.sh -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:44.879 09:55:58 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:14:44.879 ************************************ 00:14:44.879 END TEST setup.sh 00:14:44.879 ************************************ 00:14:45.139 09:55:58 -- common/autotest_common.sh@1142 -- # return 0 00:14:45.139 09:55:58 -- spdk/autotest.sh@128 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:14:45.708 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:14:45.708 Hugepages 00:14:45.708 node hugesize free / total 00:14:45.708 node0 1048576kB 0 / 0 00:14:45.708 node0 2048kB 2048 / 2048 00:14:45.708 00:14:45.708 Type BDF Vendor Device NUMA Driver Device Block devices 00:14:45.969 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:14:45.969 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:14:45.969 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme0 nvme0n1 nvme0n2 nvme0n3 00:14:46.229 09:55:59 -- spdk/autotest.sh@130 -- # uname -s 00:14:46.229 09:55:59 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:14:46.229 09:55:59 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:14:46.229 09:55:59 -- common/autotest_common.sh@1531 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:14:46.798 0000:00:03.0 (1af4 1001): Active 
devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:14:47.058 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:14:47.058 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:14:47.058 09:56:00 -- common/autotest_common.sh@1532 -- # sleep 1 00:14:48.042 09:56:01 -- common/autotest_common.sh@1533 -- # bdfs=() 00:14:48.042 09:56:01 -- common/autotest_common.sh@1533 -- # local bdfs 00:14:48.042 09:56:01 -- common/autotest_common.sh@1534 -- # bdfs=($(get_nvme_bdfs)) 00:14:48.042 09:56:01 -- common/autotest_common.sh@1534 -- # get_nvme_bdfs 00:14:48.042 09:56:01 -- common/autotest_common.sh@1513 -- # bdfs=() 00:14:48.042 09:56:01 -- common/autotest_common.sh@1513 -- # local bdfs 00:14:48.043 09:56:01 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:14:48.043 09:56:01 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:14:48.043 09:56:01 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:14:48.043 09:56:01 -- common/autotest_common.sh@1515 -- # (( 2 == 0 )) 00:14:48.043 09:56:01 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:14:48.043 09:56:01 -- common/autotest_common.sh@1536 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:14:48.608 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:14:48.608 Waiting for block devices as requested 00:14:48.608 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:14:48.865 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:14:48.865 09:56:02 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:14:48.865 09:56:02 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:14:48.865 09:56:02 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:14:48.865 09:56:02 -- common/autotest_common.sh@1502 -- # grep 0000:00:10.0/nvme/nvme 00:14:48.865 09:56:02 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:14:48.865 09:56:02 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:14:48.865 09:56:02 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:14:48.865 09:56:02 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme1 00:14:48.865 09:56:02 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme1 00:14:48.865 09:56:02 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme1 ]] 00:14:48.865 09:56:02 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme1 00:14:48.865 09:56:02 -- common/autotest_common.sh@1545 -- # grep oacs 00:14:48.865 09:56:02 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:14:48.865 09:56:02 -- common/autotest_common.sh@1545 -- # oacs=' 0x12a' 00:14:48.865 09:56:02 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:14:48.865 09:56:02 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:14:48.865 09:56:02 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:14:48.865 09:56:02 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme1 00:14:48.865 09:56:02 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:14:48.865 09:56:02 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:14:48.865 09:56:02 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:14:48.865 09:56:02 -- common/autotest_common.sh@1557 -- # continue 
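The nvme_namespace_revert path traced here first collects the NVMe PCI addresses with gen_nvme.sh piped through jq, then resolves each controller node from sysfs and reads the OACS and unvmcap fields with nvme-cli; a controller with no unallocated capacity (or without namespace management at all) is skipped, which is the continue seen above. A rough sketch of that per-controller check, assuming nvme-cli is installed and the fields are reported as in this trace:

  rootdir=/home/vagrant/spdk_repo/spdk
  mapfile -t bdfs < <("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')
  for bdf in "${bdfs[@]}"; do
      ctrlr=/dev/$(basename "$(readlink -f /sys/class/nvme/nvme* | grep "$bdf/nvme/nvme")")
      oacs=$(nvme id-ctrl "$ctrlr" | grep oacs | cut -d: -f2)          # e.g. 0x12a here
      unvmcap=$(nvme id-ctrl "$ctrlr" | grep unvmcap | cut -d: -f2)    # 0 on these emulated drives
      if (( (oacs & 0x8) == 0 )) || (( unvmcap == 0 )); then
          continue    # no namespace management, or nothing to reclaim, so leave it alone
      fi
      # only controllers that reach this point would actually have their namespaces reverted
  done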
00:14:48.865 09:56:02 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:14:48.865 09:56:02 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:14:48.865 09:56:02 -- common/autotest_common.sh@1502 -- # grep 0000:00:11.0/nvme/nvme 00:14:48.865 09:56:02 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:14:48.865 09:56:02 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:14:48.865 09:56:02 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:14:48.865 09:56:02 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:14:48.865 09:56:02 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme0 00:14:48.865 09:56:02 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme0 00:14:48.865 09:56:02 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme0 ]] 00:14:48.865 09:56:02 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme0 00:14:48.865 09:56:02 -- common/autotest_common.sh@1545 -- # grep oacs 00:14:48.865 09:56:02 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:14:48.865 09:56:02 -- common/autotest_common.sh@1545 -- # oacs=' 0x12a' 00:14:48.865 09:56:02 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:14:48.865 09:56:02 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:14:48.865 09:56:02 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme0 00:14:48.865 09:56:02 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:14:48.865 09:56:02 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:14:48.865 09:56:02 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:14:48.865 09:56:02 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:14:48.865 09:56:02 -- common/autotest_common.sh@1557 -- # continue 00:14:48.865 09:56:02 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:14:48.865 09:56:02 -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:48.865 09:56:02 -- common/autotest_common.sh@10 -- # set +x 00:14:48.865 09:56:02 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:14:48.865 09:56:02 -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:48.865 09:56:02 -- common/autotest_common.sh@10 -- # set +x 00:14:48.865 09:56:02 -- spdk/autotest.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:14:49.798 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:14:49.798 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:14:49.798 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:14:50.057 09:56:03 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:14:50.057 09:56:03 -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:50.057 09:56:03 -- common/autotest_common.sh@10 -- # set +x 00:14:50.057 09:56:03 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:14:50.057 09:56:03 -- common/autotest_common.sh@1591 -- # mapfile -t bdfs 00:14:50.057 09:56:03 -- common/autotest_common.sh@1591 -- # get_nvme_bdfs_by_id 0x0a54 00:14:50.057 09:56:03 -- common/autotest_common.sh@1577 -- # bdfs=() 00:14:50.057 09:56:03 -- common/autotest_common.sh@1577 -- # local bdfs 00:14:50.057 09:56:03 -- common/autotest_common.sh@1579 -- # get_nvme_bdfs 00:14:50.057 09:56:03 -- common/autotest_common.sh@1513 -- # bdfs=() 00:14:50.057 09:56:03 -- common/autotest_common.sh@1513 -- # local bdfs 00:14:50.057 09:56:03 -- common/autotest_common.sh@1514 -- # 
bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:14:50.057 09:56:03 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:14:50.057 09:56:03 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:14:50.057 09:56:03 -- common/autotest_common.sh@1515 -- # (( 2 == 0 )) 00:14:50.057 09:56:03 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:14:50.057 09:56:03 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:14:50.057 09:56:03 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:14:50.057 09:56:03 -- common/autotest_common.sh@1580 -- # device=0x0010 00:14:50.057 09:56:03 -- common/autotest_common.sh@1581 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:14:50.057 09:56:03 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:14:50.057 09:56:03 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:14:50.057 09:56:03 -- common/autotest_common.sh@1580 -- # device=0x0010 00:14:50.057 09:56:03 -- common/autotest_common.sh@1581 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:14:50.057 09:56:03 -- common/autotest_common.sh@1586 -- # printf '%s\n' 00:14:50.057 09:56:03 -- common/autotest_common.sh@1592 -- # [[ -z '' ]] 00:14:50.057 09:56:03 -- common/autotest_common.sh@1593 -- # return 0 00:14:50.057 09:56:03 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:14:50.057 09:56:03 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:14:50.057 09:56:03 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:14:50.057 09:56:03 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:14:50.057 09:56:03 -- spdk/autotest.sh@162 -- # timing_enter lib 00:14:50.057 09:56:03 -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:50.057 09:56:03 -- common/autotest_common.sh@10 -- # set +x 00:14:50.057 09:56:03 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:14:50.057 09:56:03 -- spdk/autotest.sh@168 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:14:50.057 09:56:03 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:14:50.057 09:56:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:50.057 09:56:03 -- common/autotest_common.sh@10 -- # set +x 00:14:50.057 ************************************ 00:14:50.057 START TEST env 00:14:50.057 ************************************ 00:14:50.057 09:56:03 env -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:14:50.315 * Looking for test storage... 
00:14:50.315 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:14:50.315 09:56:03 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:14:50.315 09:56:03 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:14:50.315 09:56:03 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:50.315 09:56:03 env -- common/autotest_common.sh@10 -- # set +x 00:14:50.315 ************************************ 00:14:50.315 START TEST env_memory 00:14:50.315 ************************************ 00:14:50.315 09:56:03 env.env_memory -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:14:50.315 00:14:50.315 00:14:50.315 CUnit - A unit testing framework for C - Version 2.1-3 00:14:50.315 http://cunit.sourceforge.net/ 00:14:50.315 00:14:50.315 00:14:50.315 Suite: memory 00:14:50.315 Test: alloc and free memory map ...[2024-07-15 09:56:03.762595] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:14:50.315 passed 00:14:50.315 Test: mem map translation ...[2024-07-15 09:56:03.782991] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:14:50.315 [2024-07-15 09:56:03.783023] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:14:50.315 [2024-07-15 09:56:03.783078] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:14:50.315 [2024-07-15 09:56:03.783084] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:14:50.315 passed 00:14:50.315 Test: mem map registration ...[2024-07-15 09:56:03.821014] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:14:50.315 [2024-07-15 09:56:03.821041] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:14:50.315 passed 00:14:50.315 Test: mem map adjacent registrations ...passed 00:14:50.315 00:14:50.315 Run Summary: Type Total Ran Passed Failed Inactive 00:14:50.315 suites 1 1 n/a 0 0 00:14:50.315 tests 4 4 4 0 0 00:14:50.315 asserts 152 152 152 0 n/a 00:14:50.315 00:14:50.315 Elapsed time = 0.139 seconds 00:14:50.315 00:14:50.315 real 0m0.161s 00:14:50.315 user 0m0.143s 00:14:50.315 sys 0m0.015s 00:14:50.315 09:56:03 env.env_memory -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:50.315 09:56:03 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:14:50.315 ************************************ 00:14:50.315 END TEST env_memory 00:14:50.315 ************************************ 00:14:50.574 09:56:03 env -- common/autotest_common.sh@1142 -- # return 0 00:14:50.574 09:56:03 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:14:50.574 09:56:03 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:14:50.574 09:56:03 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:50.574 09:56:03 env -- common/autotest_common.sh@10 -- # set +x 00:14:50.574 ************************************ 00:14:50.574 START TEST env_vtophys 
00:14:50.574 ************************************ 00:14:50.574 09:56:03 env.env_vtophys -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:14:50.574 EAL: lib.eal log level changed from notice to debug 00:14:50.574 EAL: Detected lcore 0 as core 0 on socket 0 00:14:50.574 EAL: Detected lcore 1 as core 0 on socket 0 00:14:50.574 EAL: Detected lcore 2 as core 0 on socket 0 00:14:50.574 EAL: Detected lcore 3 as core 0 on socket 0 00:14:50.574 EAL: Detected lcore 4 as core 0 on socket 0 00:14:50.574 EAL: Detected lcore 5 as core 0 on socket 0 00:14:50.574 EAL: Detected lcore 6 as core 0 on socket 0 00:14:50.574 EAL: Detected lcore 7 as core 0 on socket 0 00:14:50.574 EAL: Detected lcore 8 as core 0 on socket 0 00:14:50.574 EAL: Detected lcore 9 as core 0 on socket 0 00:14:50.574 EAL: Maximum logical cores by configuration: 128 00:14:50.574 EAL: Detected CPU lcores: 10 00:14:50.574 EAL: Detected NUMA nodes: 1 00:14:50.574 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:14:50.574 EAL: Detected shared linkage of DPDK 00:14:50.574 EAL: No shared files mode enabled, IPC will be disabled 00:14:50.574 EAL: Selected IOVA mode 'PA' 00:14:50.574 EAL: Probing VFIO support... 00:14:50.574 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:14:50.574 EAL: VFIO modules not loaded, skipping VFIO support... 00:14:50.574 EAL: Ask a virtual area of 0x2e000 bytes 00:14:50.574 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:14:50.574 EAL: Setting up physically contiguous memory... 00:14:50.574 EAL: Setting maximum number of open files to 524288 00:14:50.574 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:14:50.574 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:14:50.574 EAL: Ask a virtual area of 0x61000 bytes 00:14:50.574 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:14:50.574 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:14:50.574 EAL: Ask a virtual area of 0x400000000 bytes 00:14:50.574 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:14:50.574 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:14:50.574 EAL: Ask a virtual area of 0x61000 bytes 00:14:50.574 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:14:50.574 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:14:50.574 EAL: Ask a virtual area of 0x400000000 bytes 00:14:50.574 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:14:50.574 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:14:50.574 EAL: Ask a virtual area of 0x61000 bytes 00:14:50.574 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:14:50.574 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:14:50.574 EAL: Ask a virtual area of 0x400000000 bytes 00:14:50.574 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:14:50.574 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:14:50.574 EAL: Ask a virtual area of 0x61000 bytes 00:14:50.574 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:14:50.574 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:14:50.574 EAL: Ask a virtual area of 0x400000000 bytes 00:14:50.574 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:14:50.574 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:14:50.574 EAL: Hugepages will be freed exactly as allocated. 
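The memseg reservations printed above can be sanity-checked directly: each of the 4 segment lists holds 8192 segments of the 2 MiB hugepage size, which is exactly the 0x400000000 bytes of virtual address space reported per list. In shell arithmetic:

  echo $(( 8192 * 2097152 ))        # 17179869184 bytes = 0x400000000 per memseg list
  echo $(( 4 * 8192 * 2097152 ))    # 4 lists => 64 GiB of reserved virtual address space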
00:14:50.574 EAL: No shared files mode enabled, IPC is disabled 00:14:50.574 EAL: No shared files mode enabled, IPC is disabled 00:14:50.574 EAL: TSC frequency is ~2290000 KHz 00:14:50.574 EAL: Main lcore 0 is ready (tid=7fbb9c60ea00;cpuset=[0]) 00:14:50.574 EAL: Trying to obtain current memory policy. 00:14:50.574 EAL: Setting policy MPOL_PREFERRED for socket 0 00:14:50.574 EAL: Restoring previous memory policy: 0 00:14:50.574 EAL: request: mp_malloc_sync 00:14:50.574 EAL: No shared files mode enabled, IPC is disabled 00:14:50.574 EAL: Heap on socket 0 was expanded by 2MB 00:14:50.574 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:14:50.574 EAL: No PCI address specified using 'addr=' in: bus=pci 00:14:50.574 EAL: Mem event callback 'spdk:(nil)' registered 00:14:50.574 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:14:50.574 00:14:50.574 00:14:50.574 CUnit - A unit testing framework for C - Version 2.1-3 00:14:50.574 http://cunit.sourceforge.net/ 00:14:50.574 00:14:50.574 00:14:50.574 Suite: components_suite 00:14:50.574 Test: vtophys_malloc_test ...passed 00:14:50.574 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:14:50.574 EAL: Setting policy MPOL_PREFERRED for socket 0 00:14:50.574 EAL: Restoring previous memory policy: 4 00:14:50.574 EAL: Calling mem event callback 'spdk:(nil)' 00:14:50.574 EAL: request: mp_malloc_sync 00:14:50.574 EAL: No shared files mode enabled, IPC is disabled 00:14:50.574 EAL: Heap on socket 0 was expanded by 4MB 00:14:50.574 EAL: Calling mem event callback 'spdk:(nil)' 00:14:50.574 EAL: request: mp_malloc_sync 00:14:50.574 EAL: No shared files mode enabled, IPC is disabled 00:14:50.574 EAL: Heap on socket 0 was shrunk by 4MB 00:14:50.574 EAL: Trying to obtain current memory policy. 00:14:50.574 EAL: Setting policy MPOL_PREFERRED for socket 0 00:14:50.574 EAL: Restoring previous memory policy: 4 00:14:50.574 EAL: Calling mem event callback 'spdk:(nil)' 00:14:50.574 EAL: request: mp_malloc_sync 00:14:50.574 EAL: No shared files mode enabled, IPC is disabled 00:14:50.574 EAL: Heap on socket 0 was expanded by 6MB 00:14:50.574 EAL: Calling mem event callback 'spdk:(nil)' 00:14:50.574 EAL: request: mp_malloc_sync 00:14:50.574 EAL: No shared files mode enabled, IPC is disabled 00:14:50.574 EAL: Heap on socket 0 was shrunk by 6MB 00:14:50.574 EAL: Trying to obtain current memory policy. 00:14:50.574 EAL: Setting policy MPOL_PREFERRED for socket 0 00:14:50.574 EAL: Restoring previous memory policy: 4 00:14:50.574 EAL: Calling mem event callback 'spdk:(nil)' 00:14:50.575 EAL: request: mp_malloc_sync 00:14:50.575 EAL: No shared files mode enabled, IPC is disabled 00:14:50.575 EAL: Heap on socket 0 was expanded by 10MB 00:14:50.575 EAL: Calling mem event callback 'spdk:(nil)' 00:14:50.575 EAL: request: mp_malloc_sync 00:14:50.575 EAL: No shared files mode enabled, IPC is disabled 00:14:50.575 EAL: Heap on socket 0 was shrunk by 10MB 00:14:50.575 EAL: Trying to obtain current memory policy. 
00:14:50.575 EAL: Setting policy MPOL_PREFERRED for socket 0 00:14:50.575 EAL: Restoring previous memory policy: 4 00:14:50.575 EAL: Calling mem event callback 'spdk:(nil)' 00:14:50.575 EAL: request: mp_malloc_sync 00:14:50.575 EAL: No shared files mode enabled, IPC is disabled 00:14:50.575 EAL: Heap on socket 0 was expanded by 18MB 00:14:50.575 EAL: Calling mem event callback 'spdk:(nil)' 00:14:50.575 EAL: request: mp_malloc_sync 00:14:50.575 EAL: No shared files mode enabled, IPC is disabled 00:14:50.575 EAL: Heap on socket 0 was shrunk by 18MB 00:14:50.575 EAL: Trying to obtain current memory policy. 00:14:50.575 EAL: Setting policy MPOL_PREFERRED for socket 0 00:14:50.575 EAL: Restoring previous memory policy: 4 00:14:50.575 EAL: Calling mem event callback 'spdk:(nil)' 00:14:50.575 EAL: request: mp_malloc_sync 00:14:50.575 EAL: No shared files mode enabled, IPC is disabled 00:14:50.575 EAL: Heap on socket 0 was expanded by 34MB 00:14:50.575 EAL: Calling mem event callback 'spdk:(nil)' 00:14:50.575 EAL: request: mp_malloc_sync 00:14:50.575 EAL: No shared files mode enabled, IPC is disabled 00:14:50.575 EAL: Heap on socket 0 was shrunk by 34MB 00:14:50.575 EAL: Trying to obtain current memory policy. 00:14:50.575 EAL: Setting policy MPOL_PREFERRED for socket 0 00:14:50.575 EAL: Restoring previous memory policy: 4 00:14:50.575 EAL: Calling mem event callback 'spdk:(nil)' 00:14:50.575 EAL: request: mp_malloc_sync 00:14:50.575 EAL: No shared files mode enabled, IPC is disabled 00:14:50.575 EAL: Heap on socket 0 was expanded by 66MB 00:14:50.575 EAL: Calling mem event callback 'spdk:(nil)' 00:14:50.832 EAL: request: mp_malloc_sync 00:14:50.832 EAL: No shared files mode enabled, IPC is disabled 00:14:50.832 EAL: Heap on socket 0 was shrunk by 66MB 00:14:50.832 EAL: Trying to obtain current memory policy. 00:14:50.832 EAL: Setting policy MPOL_PREFERRED for socket 0 00:14:50.832 EAL: Restoring previous memory policy: 4 00:14:50.832 EAL: Calling mem event callback 'spdk:(nil)' 00:14:50.832 EAL: request: mp_malloc_sync 00:14:50.832 EAL: No shared files mode enabled, IPC is disabled 00:14:50.832 EAL: Heap on socket 0 was expanded by 130MB 00:14:50.832 EAL: Calling mem event callback 'spdk:(nil)' 00:14:50.832 EAL: request: mp_malloc_sync 00:14:50.833 EAL: No shared files mode enabled, IPC is disabled 00:14:50.833 EAL: Heap on socket 0 was shrunk by 130MB 00:14:50.833 EAL: Trying to obtain current memory policy. 00:14:50.833 EAL: Setting policy MPOL_PREFERRED for socket 0 00:14:50.833 EAL: Restoring previous memory policy: 4 00:14:50.833 EAL: Calling mem event callback 'spdk:(nil)' 00:14:50.833 EAL: request: mp_malloc_sync 00:14:50.833 EAL: No shared files mode enabled, IPC is disabled 00:14:50.833 EAL: Heap on socket 0 was expanded by 258MB 00:14:50.833 EAL: Calling mem event callback 'spdk:(nil)' 00:14:50.833 EAL: request: mp_malloc_sync 00:14:50.833 EAL: No shared files mode enabled, IPC is disabled 00:14:50.833 EAL: Heap on socket 0 was shrunk by 258MB 00:14:50.833 EAL: Trying to obtain current memory policy. 
00:14:50.833 EAL: Setting policy MPOL_PREFERRED for socket 0 00:14:51.091 EAL: Restoring previous memory policy: 4 00:14:51.091 EAL: Calling mem event callback 'spdk:(nil)' 00:14:51.091 EAL: request: mp_malloc_sync 00:14:51.091 EAL: No shared files mode enabled, IPC is disabled 00:14:51.091 EAL: Heap on socket 0 was expanded by 514MB 00:14:51.091 EAL: Calling mem event callback 'spdk:(nil)' 00:14:51.091 EAL: request: mp_malloc_sync 00:14:51.091 EAL: No shared files mode enabled, IPC is disabled 00:14:51.091 EAL: Heap on socket 0 was shrunk by 514MB 00:14:51.091 EAL: Trying to obtain current memory policy. 00:14:51.091 EAL: Setting policy MPOL_PREFERRED for socket 0 00:14:51.349 EAL: Restoring previous memory policy: 4 00:14:51.349 EAL: Calling mem event callback 'spdk:(nil)' 00:14:51.349 EAL: request: mp_malloc_sync 00:14:51.349 EAL: No shared files mode enabled, IPC is disabled 00:14:51.349 EAL: Heap on socket 0 was expanded by 1026MB 00:14:51.607 EAL: Calling mem event callback 'spdk:(nil)' 00:14:51.607 passed 00:14:51.607 00:14:51.607 Run Summary: Type Total Ran Passed Failed Inactive 00:14:51.607 suites 1 1 n/a 0 0 00:14:51.607 tests 2 2 2 0 0 00:14:51.607 asserts 5283 5283 5283 0 n/a 00:14:51.607 00:14:51.607 Elapsed time = 0.975 seconds 00:14:51.607 EAL: request: mp_malloc_sync 00:14:51.607 EAL: No shared files mode enabled, IPC is disabled 00:14:51.607 EAL: Heap on socket 0 was shrunk by 1026MB 00:14:51.607 EAL: Calling mem event callback 'spdk:(nil)' 00:14:51.607 EAL: request: mp_malloc_sync 00:14:51.607 EAL: No shared files mode enabled, IPC is disabled 00:14:51.607 EAL: Heap on socket 0 was shrunk by 2MB 00:14:51.607 EAL: No shared files mode enabled, IPC is disabled 00:14:51.607 EAL: No shared files mode enabled, IPC is disabled 00:14:51.607 EAL: No shared files mode enabled, IPC is disabled 00:14:51.607 00:14:51.607 real 0m1.168s 00:14:51.607 user 0m0.633s 00:14:51.607 sys 0m0.412s 00:14:51.607 09:56:05 env.env_vtophys -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:51.607 09:56:05 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:14:51.607 ************************************ 00:14:51.607 END TEST env_vtophys 00:14:51.607 ************************************ 00:14:51.607 09:56:05 env -- common/autotest_common.sh@1142 -- # return 0 00:14:51.607 09:56:05 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:14:51.607 09:56:05 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:14:51.607 09:56:05 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:51.607 09:56:05 env -- common/autotest_common.sh@10 -- # set +x 00:14:51.607 ************************************ 00:14:51.607 START TEST env_pci 00:14:51.607 ************************************ 00:14:51.607 09:56:05 env.env_pci -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:14:51.607 00:14:51.607 00:14:51.607 CUnit - A unit testing framework for C - Version 2.1-3 00:14:51.607 http://cunit.sourceforge.net/ 00:14:51.607 00:14:51.607 00:14:51.607 Suite: pci 00:14:51.607 Test: pci_hook ...[2024-07-15 09:56:05.184050] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 60590 has claimed it 00:14:51.607 passed 00:14:51.607 00:14:51.607 Run Summary: Type Total Ran Passed Failed Inactive 00:14:51.607 suites 1 1 n/a 0 0 00:14:51.607 tests 1 1 1 0 0 00:14:51.607 asserts 25 25 25 0 n/a 00:14:51.607 
00:14:51.607 Elapsed time = 0.002 seconds 00:14:51.607 EAL: Cannot find device (10000:00:01.0) 00:14:51.607 EAL: Failed to attach device on primary process 00:14:51.927 00:14:51.927 real 0m0.024s 00:14:51.927 user 0m0.008s 00:14:51.927 sys 0m0.015s 00:14:51.927 09:56:05 env.env_pci -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:51.927 09:56:05 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:14:51.927 ************************************ 00:14:51.927 END TEST env_pci 00:14:51.927 ************************************ 00:14:51.927 09:56:05 env -- common/autotest_common.sh@1142 -- # return 0 00:14:51.927 09:56:05 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:14:51.927 09:56:05 env -- env/env.sh@15 -- # uname 00:14:51.927 09:56:05 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:14:51.927 09:56:05 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:14:51.927 09:56:05 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:14:51.927 09:56:05 env -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:14:51.927 09:56:05 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:51.927 09:56:05 env -- common/autotest_common.sh@10 -- # set +x 00:14:51.927 ************************************ 00:14:51.927 START TEST env_dpdk_post_init 00:14:51.927 ************************************ 00:14:51.927 09:56:05 env.env_dpdk_post_init -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:14:51.927 EAL: Detected CPU lcores: 10 00:14:51.927 EAL: Detected NUMA nodes: 1 00:14:51.927 EAL: Detected shared linkage of DPDK 00:14:51.927 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:14:51.927 EAL: Selected IOVA mode 'PA' 00:14:51.927 TELEMETRY: No legacy callbacks, legacy socket not created 00:14:51.927 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:14:51.927 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:14:51.927 Starting DPDK initialization... 00:14:51.927 Starting SPDK post initialization... 00:14:51.927 SPDK NVMe probe 00:14:51.927 Attaching to 0000:00:10.0 00:14:51.927 Attaching to 0000:00:11.0 00:14:51.927 Attached to 0000:00:10.0 00:14:51.927 Attached to 0000:00:11.0 00:14:51.927 Cleaning up... 
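env_dpdk_post_init above is launched with -c 0x1 and --base-virtaddr=0x200000000000: the coremask restricts the EAL to lcore 0 and the fixed base address keeps hugepage mappings at a predictable location, the same base the other env tests use. Re-running just that probe by hand with the paths from this workspace would look roughly like:

  /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init \
      -c 0x1 --base-virtaddr=0x200000000000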
00:14:51.927 00:14:51.927 real 0m0.190s 00:14:51.927 user 0m0.053s 00:14:51.927 sys 0m0.038s 00:14:51.927 09:56:05 env.env_dpdk_post_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:51.927 09:56:05 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:14:51.927 ************************************ 00:14:51.928 END TEST env_dpdk_post_init 00:14:51.928 ************************************ 00:14:52.209 09:56:05 env -- common/autotest_common.sh@1142 -- # return 0 00:14:52.209 09:56:05 env -- env/env.sh@26 -- # uname 00:14:52.209 09:56:05 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:14:52.209 09:56:05 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:14:52.209 09:56:05 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:14:52.209 09:56:05 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:52.209 09:56:05 env -- common/autotest_common.sh@10 -- # set +x 00:14:52.209 ************************************ 00:14:52.209 START TEST env_mem_callbacks 00:14:52.209 ************************************ 00:14:52.209 09:56:05 env.env_mem_callbacks -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:14:52.209 EAL: Detected CPU lcores: 10 00:14:52.209 EAL: Detected NUMA nodes: 1 00:14:52.209 EAL: Detected shared linkage of DPDK 00:14:52.209 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:14:52.209 EAL: Selected IOVA mode 'PA' 00:14:52.209 00:14:52.209 00:14:52.209 CUnit - A unit testing framework for C - Version 2.1-3 00:14:52.209 http://cunit.sourceforge.net/ 00:14:52.209 00:14:52.209 00:14:52.209 Suite: memory 00:14:52.209 Test: test ... 00:14:52.209 register 0x200000200000 2097152 00:14:52.209 malloc 3145728 00:14:52.209 TELEMETRY: No legacy callbacks, legacy socket not created 00:14:52.209 register 0x200000400000 4194304 00:14:52.209 buf 0x200000500000 len 3145728 PASSED 00:14:52.209 malloc 64 00:14:52.209 buf 0x2000004fff40 len 64 PASSED 00:14:52.210 malloc 4194304 00:14:52.210 register 0x200000800000 6291456 00:14:52.210 buf 0x200000a00000 len 4194304 PASSED 00:14:52.210 free 0x200000500000 3145728 00:14:52.210 free 0x2000004fff40 64 00:14:52.210 unregister 0x200000400000 4194304 PASSED 00:14:52.210 free 0x200000a00000 4194304 00:14:52.210 unregister 0x200000800000 6291456 PASSED 00:14:52.210 malloc 8388608 00:14:52.210 register 0x200000400000 10485760 00:14:52.210 buf 0x200000600000 len 8388608 PASSED 00:14:52.210 free 0x200000600000 8388608 00:14:52.210 unregister 0x200000400000 10485760 PASSED 00:14:52.210 passed 00:14:52.210 00:14:52.210 Run Summary: Type Total Ran Passed Failed Inactive 00:14:52.210 suites 1 1 n/a 0 0 00:14:52.210 tests 1 1 1 0 0 00:14:52.210 asserts 15 15 15 0 n/a 00:14:52.210 00:14:52.210 Elapsed time = 0.006 seconds 00:14:52.210 00:14:52.210 real 0m0.146s 00:14:52.210 user 0m0.018s 00:14:52.210 sys 0m0.026s 00:14:52.210 09:56:05 env.env_mem_callbacks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:52.210 09:56:05 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:14:52.210 ************************************ 00:14:52.210 END TEST env_mem_callbacks 00:14:52.210 ************************************ 00:14:52.210 09:56:05 env -- common/autotest_common.sh@1142 -- # return 0 00:14:52.210 00:14:52.210 real 0m2.129s 00:14:52.210 user 0m1.010s 00:14:52.210 sys 0m0.807s 00:14:52.210 09:56:05 env -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:52.210 
09:56:05 env -- common/autotest_common.sh@10 -- # set +x 00:14:52.210 ************************************ 00:14:52.210 END TEST env 00:14:52.210 ************************************ 00:14:52.210 09:56:05 -- common/autotest_common.sh@1142 -- # return 0 00:14:52.210 09:56:05 -- spdk/autotest.sh@169 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:14:52.210 09:56:05 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:14:52.210 09:56:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:52.210 09:56:05 -- common/autotest_common.sh@10 -- # set +x 00:14:52.210 ************************************ 00:14:52.210 START TEST rpc 00:14:52.210 ************************************ 00:14:52.210 09:56:05 rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:14:52.469 * Looking for test storage... 00:14:52.469 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:14:52.469 09:56:05 rpc -- rpc/rpc.sh@65 -- # spdk_pid=60694 00:14:52.469 09:56:05 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:14:52.469 09:56:05 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:14:52.469 09:56:05 rpc -- rpc/rpc.sh@67 -- # waitforlisten 60694 00:14:52.469 09:56:05 rpc -- common/autotest_common.sh@829 -- # '[' -z 60694 ']' 00:14:52.469 09:56:05 rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:52.469 09:56:05 rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:52.469 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:52.469 09:56:05 rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:52.469 09:56:05 rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:52.469 09:56:05 rpc -- common/autotest_common.sh@10 -- # set +x 00:14:52.469 [2024-07-15 09:56:05.953879] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:14:52.469 [2024-07-15 09:56:05.953952] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60694 ] 00:14:52.728 [2024-07-15 09:56:06.091392] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:52.728 [2024-07-15 09:56:06.197106] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:14:52.728 [2024-07-15 09:56:06.197152] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 60694' to capture a snapshot of events at runtime. 00:14:52.728 [2024-07-15 09:56:06.197159] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:52.728 [2024-07-15 09:56:06.197163] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:52.729 [2024-07-15 09:56:06.197167] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid60694 for offline analysis/debug. 
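Everything in TEST rpc runs against a single long-lived target: spdk_tgt is launched with '-e bdev' (rpc.sh@64 above) so the bdev tracepoint group is enabled, the harness waits for the JSON-RPC socket at /var/tmp/spdk.sock to come up, and the notices above describe how to snapshot the resulting trace. A rough by-hand equivalent, assuming the build paths printed in this run and that spdk_trace is reachable from the build tree (the polling loop stands in for the harness's waitforlisten helper):

# start the target with the bdev tracepoint group enabled
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev &
tgt_pid=$!
# poll the RPC socket until the app is listening (waitforlisten retries in much the same way)
until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock spdk_get_version >/dev/null 2>&1; do
    sleep 0.2
done
# capture a snapshot of the enabled tracepoints, per the app_setup_trace notice above
spdk_trace -s spdk_tgt -p "$tgt_pid"
kill "$tgt_pid"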
00:14:52.729 [2024-07-15 09:56:06.197191] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:53.296 09:56:06 rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:53.296 09:56:06 rpc -- common/autotest_common.sh@862 -- # return 0 00:14:53.296 09:56:06 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:14:53.296 09:56:06 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:14:53.296 09:56:06 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:14:53.296 09:56:06 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:14:53.296 09:56:06 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:14:53.296 09:56:06 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:53.296 09:56:06 rpc -- common/autotest_common.sh@10 -- # set +x 00:14:53.296 ************************************ 00:14:53.296 START TEST rpc_integrity 00:14:53.296 ************************************ 00:14:53.296 09:56:06 rpc.rpc_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:14:53.296 09:56:06 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:14:53.296 09:56:06 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:53.296 09:56:06 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:14:53.296 09:56:06 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:53.296 09:56:06 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:14:53.296 09:56:06 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:14:53.554 09:56:06 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:14:53.554 09:56:06 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:14:53.554 09:56:06 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:53.554 09:56:06 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:14:53.554 09:56:06 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:53.554 09:56:06 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:14:53.554 09:56:06 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:14:53.554 09:56:06 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:53.554 09:56:06 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:14:53.554 09:56:06 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:53.554 09:56:06 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:14:53.554 { 00:14:53.554 "aliases": [ 00:14:53.554 "37621846-70a7-44c1-be65-e84970e0a6b1" 00:14:53.554 ], 00:14:53.554 "assigned_rate_limits": { 00:14:53.554 "r_mbytes_per_sec": 0, 00:14:53.554 "rw_ios_per_sec": 0, 00:14:53.554 "rw_mbytes_per_sec": 0, 00:14:53.554 "w_mbytes_per_sec": 0 00:14:53.554 }, 00:14:53.554 "block_size": 512, 00:14:53.554 "claimed": false, 00:14:53.554 "driver_specific": {}, 00:14:53.554 "memory_domains": [ 00:14:53.554 { 00:14:53.554 "dma_device_id": "system", 00:14:53.554 "dma_device_type": 1 00:14:53.554 }, 00:14:53.554 { 00:14:53.554 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:53.554 "dma_device_type": 2 00:14:53.554 } 00:14:53.554 ], 00:14:53.554 "name": "Malloc0", 
00:14:53.554 "num_blocks": 16384, 00:14:53.554 "product_name": "Malloc disk", 00:14:53.554 "supported_io_types": { 00:14:53.554 "abort": true, 00:14:53.554 "compare": false, 00:14:53.554 "compare_and_write": false, 00:14:53.554 "copy": true, 00:14:53.554 "flush": true, 00:14:53.554 "get_zone_info": false, 00:14:53.554 "nvme_admin": false, 00:14:53.554 "nvme_io": false, 00:14:53.554 "nvme_io_md": false, 00:14:53.554 "nvme_iov_md": false, 00:14:53.554 "read": true, 00:14:53.554 "reset": true, 00:14:53.554 "seek_data": false, 00:14:53.554 "seek_hole": false, 00:14:53.554 "unmap": true, 00:14:53.554 "write": true, 00:14:53.554 "write_zeroes": true, 00:14:53.554 "zcopy": true, 00:14:53.554 "zone_append": false, 00:14:53.554 "zone_management": false 00:14:53.554 }, 00:14:53.554 "uuid": "37621846-70a7-44c1-be65-e84970e0a6b1", 00:14:53.554 "zoned": false 00:14:53.554 } 00:14:53.554 ]' 00:14:53.554 09:56:06 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:14:53.554 09:56:06 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:14:53.554 09:56:06 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:14:53.554 09:56:06 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:53.554 09:56:06 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:14:53.554 [2024-07-15 09:56:06.988218] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:14:53.554 [2024-07-15 09:56:06.988266] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:53.554 [2024-07-15 09:56:06.988280] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1ba0ad0 00:14:53.554 [2024-07-15 09:56:06.988287] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:53.554 [2024-07-15 09:56:06.989854] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:53.554 [2024-07-15 09:56:06.989887] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:14:53.554 Passthru0 00:14:53.554 09:56:06 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:53.554 09:56:06 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:14:53.554 09:56:06 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:53.554 09:56:06 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:14:53.554 09:56:07 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:53.554 09:56:07 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:14:53.554 { 00:14:53.555 "aliases": [ 00:14:53.555 "37621846-70a7-44c1-be65-e84970e0a6b1" 00:14:53.555 ], 00:14:53.555 "assigned_rate_limits": { 00:14:53.555 "r_mbytes_per_sec": 0, 00:14:53.555 "rw_ios_per_sec": 0, 00:14:53.555 "rw_mbytes_per_sec": 0, 00:14:53.555 "w_mbytes_per_sec": 0 00:14:53.555 }, 00:14:53.555 "block_size": 512, 00:14:53.555 "claim_type": "exclusive_write", 00:14:53.555 "claimed": true, 00:14:53.555 "driver_specific": {}, 00:14:53.555 "memory_domains": [ 00:14:53.555 { 00:14:53.555 "dma_device_id": "system", 00:14:53.555 "dma_device_type": 1 00:14:53.555 }, 00:14:53.555 { 00:14:53.555 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:53.555 "dma_device_type": 2 00:14:53.555 } 00:14:53.555 ], 00:14:53.555 "name": "Malloc0", 00:14:53.555 "num_blocks": 16384, 00:14:53.555 "product_name": "Malloc disk", 00:14:53.555 "supported_io_types": { 00:14:53.555 "abort": true, 00:14:53.555 "compare": false, 00:14:53.555 
"compare_and_write": false, 00:14:53.555 "copy": true, 00:14:53.555 "flush": true, 00:14:53.555 "get_zone_info": false, 00:14:53.555 "nvme_admin": false, 00:14:53.555 "nvme_io": false, 00:14:53.555 "nvme_io_md": false, 00:14:53.555 "nvme_iov_md": false, 00:14:53.555 "read": true, 00:14:53.555 "reset": true, 00:14:53.555 "seek_data": false, 00:14:53.555 "seek_hole": false, 00:14:53.555 "unmap": true, 00:14:53.555 "write": true, 00:14:53.555 "write_zeroes": true, 00:14:53.555 "zcopy": true, 00:14:53.555 "zone_append": false, 00:14:53.555 "zone_management": false 00:14:53.555 }, 00:14:53.555 "uuid": "37621846-70a7-44c1-be65-e84970e0a6b1", 00:14:53.555 "zoned": false 00:14:53.555 }, 00:14:53.555 { 00:14:53.555 "aliases": [ 00:14:53.555 "5ef2daca-7af6-564a-9a7f-b17d4cee9345" 00:14:53.555 ], 00:14:53.555 "assigned_rate_limits": { 00:14:53.555 "r_mbytes_per_sec": 0, 00:14:53.555 "rw_ios_per_sec": 0, 00:14:53.555 "rw_mbytes_per_sec": 0, 00:14:53.555 "w_mbytes_per_sec": 0 00:14:53.555 }, 00:14:53.555 "block_size": 512, 00:14:53.555 "claimed": false, 00:14:53.555 "driver_specific": { 00:14:53.555 "passthru": { 00:14:53.555 "base_bdev_name": "Malloc0", 00:14:53.555 "name": "Passthru0" 00:14:53.555 } 00:14:53.555 }, 00:14:53.555 "memory_domains": [ 00:14:53.555 { 00:14:53.555 "dma_device_id": "system", 00:14:53.555 "dma_device_type": 1 00:14:53.555 }, 00:14:53.555 { 00:14:53.555 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:53.555 "dma_device_type": 2 00:14:53.555 } 00:14:53.555 ], 00:14:53.555 "name": "Passthru0", 00:14:53.555 "num_blocks": 16384, 00:14:53.555 "product_name": "passthru", 00:14:53.555 "supported_io_types": { 00:14:53.555 "abort": true, 00:14:53.555 "compare": false, 00:14:53.555 "compare_and_write": false, 00:14:53.555 "copy": true, 00:14:53.555 "flush": true, 00:14:53.555 "get_zone_info": false, 00:14:53.555 "nvme_admin": false, 00:14:53.555 "nvme_io": false, 00:14:53.555 "nvme_io_md": false, 00:14:53.555 "nvme_iov_md": false, 00:14:53.555 "read": true, 00:14:53.555 "reset": true, 00:14:53.555 "seek_data": false, 00:14:53.555 "seek_hole": false, 00:14:53.555 "unmap": true, 00:14:53.555 "write": true, 00:14:53.555 "write_zeroes": true, 00:14:53.555 "zcopy": true, 00:14:53.555 "zone_append": false, 00:14:53.555 "zone_management": false 00:14:53.555 }, 00:14:53.555 "uuid": "5ef2daca-7af6-564a-9a7f-b17d4cee9345", 00:14:53.555 "zoned": false 00:14:53.555 } 00:14:53.555 ]' 00:14:53.555 09:56:07 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:14:53.555 09:56:07 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:14:53.555 09:56:07 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:14:53.555 09:56:07 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:53.555 09:56:07 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:14:53.555 09:56:07 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:53.555 09:56:07 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:14:53.555 09:56:07 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:53.555 09:56:07 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:14:53.555 09:56:07 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:53.555 09:56:07 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:14:53.555 09:56:07 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:53.555 09:56:07 rpc.rpc_integrity -- common/autotest_common.sh@10 -- 
# set +x 00:14:53.555 09:56:07 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:53.555 09:56:07 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:14:53.555 09:56:07 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:14:53.815 09:56:07 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:14:53.815 00:14:53.815 real 0m0.322s 00:14:53.815 user 0m0.194s 00:14:53.815 sys 0m0.049s 00:14:53.815 09:56:07 rpc.rpc_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:53.815 09:56:07 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:14:53.815 ************************************ 00:14:53.815 END TEST rpc_integrity 00:14:53.815 ************************************ 00:14:53.815 09:56:07 rpc -- common/autotest_common.sh@1142 -- # return 0 00:14:53.815 09:56:07 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:14:53.815 09:56:07 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:14:53.815 09:56:07 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:53.815 09:56:07 rpc -- common/autotest_common.sh@10 -- # set +x 00:14:53.815 ************************************ 00:14:53.815 START TEST rpc_plugins 00:14:53.815 ************************************ 00:14:53.815 09:56:07 rpc.rpc_plugins -- common/autotest_common.sh@1123 -- # rpc_plugins 00:14:53.815 09:56:07 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:14:53.815 09:56:07 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:53.815 09:56:07 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:14:53.815 09:56:07 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:53.815 09:56:07 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:14:53.815 09:56:07 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:14:53.815 09:56:07 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:53.815 09:56:07 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:14:53.815 09:56:07 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:53.815 09:56:07 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:14:53.815 { 00:14:53.815 "aliases": [ 00:14:53.815 "12da48d4-8648-45e8-9283-e4aebdffe344" 00:14:53.815 ], 00:14:53.815 "assigned_rate_limits": { 00:14:53.815 "r_mbytes_per_sec": 0, 00:14:53.815 "rw_ios_per_sec": 0, 00:14:53.815 "rw_mbytes_per_sec": 0, 00:14:53.815 "w_mbytes_per_sec": 0 00:14:53.815 }, 00:14:53.815 "block_size": 4096, 00:14:53.815 "claimed": false, 00:14:53.815 "driver_specific": {}, 00:14:53.815 "memory_domains": [ 00:14:53.815 { 00:14:53.815 "dma_device_id": "system", 00:14:53.815 "dma_device_type": 1 00:14:53.815 }, 00:14:53.815 { 00:14:53.815 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:53.815 "dma_device_type": 2 00:14:53.815 } 00:14:53.815 ], 00:14:53.815 "name": "Malloc1", 00:14:53.815 "num_blocks": 256, 00:14:53.815 "product_name": "Malloc disk", 00:14:53.815 "supported_io_types": { 00:14:53.815 "abort": true, 00:14:53.815 "compare": false, 00:14:53.815 "compare_and_write": false, 00:14:53.815 "copy": true, 00:14:53.815 "flush": true, 00:14:53.815 "get_zone_info": false, 00:14:53.815 "nvme_admin": false, 00:14:53.815 "nvme_io": false, 00:14:53.815 "nvme_io_md": false, 00:14:53.815 "nvme_iov_md": false, 00:14:53.815 "read": true, 00:14:53.815 "reset": true, 00:14:53.815 "seek_data": false, 00:14:53.815 "seek_hole": false, 00:14:53.815 "unmap": true, 00:14:53.815 "write": true, 00:14:53.815 "write_zeroes": true, 
00:14:53.815 "zcopy": true, 00:14:53.815 "zone_append": false, 00:14:53.815 "zone_management": false 00:14:53.815 }, 00:14:53.815 "uuid": "12da48d4-8648-45e8-9283-e4aebdffe344", 00:14:53.815 "zoned": false 00:14:53.815 } 00:14:53.815 ]' 00:14:53.815 09:56:07 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:14:53.815 09:56:07 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:14:53.815 09:56:07 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:14:53.815 09:56:07 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:53.815 09:56:07 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:14:53.815 09:56:07 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:53.815 09:56:07 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:14:53.815 09:56:07 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:53.815 09:56:07 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:14:53.815 09:56:07 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:53.815 09:56:07 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:14:53.815 09:56:07 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:14:53.815 09:56:07 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:14:53.815 00:14:53.816 real 0m0.148s 00:14:53.816 user 0m0.094s 00:14:53.816 sys 0m0.016s 00:14:53.816 09:56:07 rpc.rpc_plugins -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:53.816 09:56:07 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:14:53.816 ************************************ 00:14:53.816 END TEST rpc_plugins 00:14:53.816 ************************************ 00:14:54.076 09:56:07 rpc -- common/autotest_common.sh@1142 -- # return 0 00:14:54.076 09:56:07 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:14:54.076 09:56:07 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:14:54.076 09:56:07 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:54.076 09:56:07 rpc -- common/autotest_common.sh@10 -- # set +x 00:14:54.076 ************************************ 00:14:54.076 START TEST rpc_trace_cmd_test 00:14:54.076 ************************************ 00:14:54.076 09:56:07 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1123 -- # rpc_trace_cmd_test 00:14:54.076 09:56:07 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:14:54.076 09:56:07 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:14:54.076 09:56:07 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:54.076 09:56:07 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:14:54.076 09:56:07 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:54.076 09:56:07 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:14:54.076 "bdev": { 00:14:54.076 "mask": "0x8", 00:14:54.076 "tpoint_mask": "0xffffffffffffffff" 00:14:54.076 }, 00:14:54.076 "bdev_nvme": { 00:14:54.076 "mask": "0x4000", 00:14:54.076 "tpoint_mask": "0x0" 00:14:54.076 }, 00:14:54.076 "blobfs": { 00:14:54.076 "mask": "0x80", 00:14:54.076 "tpoint_mask": "0x0" 00:14:54.076 }, 00:14:54.076 "dsa": { 00:14:54.076 "mask": "0x200", 00:14:54.076 "tpoint_mask": "0x0" 00:14:54.076 }, 00:14:54.076 "ftl": { 00:14:54.076 "mask": "0x40", 00:14:54.076 "tpoint_mask": "0x0" 00:14:54.076 }, 00:14:54.076 "iaa": { 00:14:54.076 "mask": "0x1000", 00:14:54.076 "tpoint_mask": "0x0" 00:14:54.076 }, 00:14:54.076 "iscsi_conn": { 
00:14:54.076 "mask": "0x2", 00:14:54.076 "tpoint_mask": "0x0" 00:14:54.076 }, 00:14:54.076 "nvme_pcie": { 00:14:54.076 "mask": "0x800", 00:14:54.076 "tpoint_mask": "0x0" 00:14:54.076 }, 00:14:54.076 "nvme_tcp": { 00:14:54.076 "mask": "0x2000", 00:14:54.076 "tpoint_mask": "0x0" 00:14:54.076 }, 00:14:54.076 "nvmf_rdma": { 00:14:54.076 "mask": "0x10", 00:14:54.076 "tpoint_mask": "0x0" 00:14:54.076 }, 00:14:54.076 "nvmf_tcp": { 00:14:54.076 "mask": "0x20", 00:14:54.076 "tpoint_mask": "0x0" 00:14:54.076 }, 00:14:54.076 "scsi": { 00:14:54.076 "mask": "0x4", 00:14:54.076 "tpoint_mask": "0x0" 00:14:54.076 }, 00:14:54.076 "sock": { 00:14:54.076 "mask": "0x8000", 00:14:54.076 "tpoint_mask": "0x0" 00:14:54.076 }, 00:14:54.076 "thread": { 00:14:54.076 "mask": "0x400", 00:14:54.076 "tpoint_mask": "0x0" 00:14:54.076 }, 00:14:54.076 "tpoint_group_mask": "0x8", 00:14:54.076 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid60694" 00:14:54.076 }' 00:14:54.076 09:56:07 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:14:54.076 09:56:07 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:14:54.076 09:56:07 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:14:54.076 09:56:07 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:14:54.076 09:56:07 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:14:54.076 09:56:07 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:14:54.076 09:56:07 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:14:54.076 09:56:07 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:14:54.076 09:56:07 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:14:54.076 09:56:07 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:14:54.076 00:14:54.076 real 0m0.214s 00:14:54.076 user 0m0.174s 00:14:54.076 sys 0m0.031s 00:14:54.076 09:56:07 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:54.076 09:56:07 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:14:54.076 ************************************ 00:14:54.076 END TEST rpc_trace_cmd_test 00:14:54.076 ************************************ 00:14:54.337 09:56:07 rpc -- common/autotest_common.sh@1142 -- # return 0 00:14:54.337 09:56:07 rpc -- rpc/rpc.sh@76 -- # [[ 1 -eq 1 ]] 00:14:54.337 09:56:07 rpc -- rpc/rpc.sh@77 -- # run_test go_rpc go_rpc 00:14:54.337 09:56:07 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:14:54.337 09:56:07 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:54.337 09:56:07 rpc -- common/autotest_common.sh@10 -- # set +x 00:14:54.337 ************************************ 00:14:54.337 START TEST go_rpc 00:14:54.337 ************************************ 00:14:54.337 09:56:07 rpc.go_rpc -- common/autotest_common.sh@1123 -- # go_rpc 00:14:54.337 09:56:07 rpc.go_rpc -- rpc/rpc.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc 00:14:54.337 09:56:07 rpc.go_rpc -- rpc/rpc.sh@51 -- # bdevs='[]' 00:14:54.337 09:56:07 rpc.go_rpc -- rpc/rpc.sh@52 -- # jq length 00:14:54.337 09:56:07 rpc.go_rpc -- rpc/rpc.sh@52 -- # '[' 0 == 0 ']' 00:14:54.337 09:56:07 rpc.go_rpc -- rpc/rpc.sh@54 -- # rpc_cmd bdev_malloc_create 8 512 00:14:54.337 09:56:07 rpc.go_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:54.337 09:56:07 rpc.go_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:54.337 09:56:07 rpc.go_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
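The integrity tests in this stretch (rpc_integrity above with Malloc0, rpc_daemon_integrity below with Malloc3) follow the same create/inspect/delete cycle against that target: create a malloc bdev, claim it behind a passthru bdev, confirm bdev_get_bdevs reports both, then tear everything down in reverse order. Condensed to plain rpc.py calls (rpc_cmd in the log is a thin wrapper over the same socket, and the jq length checks mirror the '[' 2 == 2 ']' comparisons above):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc bdev_malloc_create 8 512                      # 8 MiB malloc bdev with 512-byte blocks; prints its name (Malloc0)
$rpc bdev_passthru_create -b Malloc0 -p Passthru0  # layer a passthru vbdev on top, claiming Malloc0
$rpc bdev_get_bdevs | jq length                    # expect 2: the claimed Malloc0 plus Passthru0
$rpc bdev_passthru_delete Passthru0                # release the claim before deleting the base bdev
$rpc bdev_malloc_delete Malloc0
$rpc bdev_get_bdevs | jq length                    # back to 0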
00:14:54.337 09:56:07 rpc.go_rpc -- rpc/rpc.sh@54 -- # malloc=Malloc2 00:14:54.337 09:56:07 rpc.go_rpc -- rpc/rpc.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc 00:14:54.337 09:56:07 rpc.go_rpc -- rpc/rpc.sh@56 -- # bdevs='[{"aliases":["1b3a0f46-3463-4bad-888d-226323cb620a"],"assigned_rate_limits":{"r_mbytes_per_sec":0,"rw_ios_per_sec":0,"rw_mbytes_per_sec":0,"w_mbytes_per_sec":0},"block_size":512,"claimed":false,"driver_specific":{},"memory_domains":[{"dma_device_id":"system","dma_device_type":1},{"dma_device_id":"SPDK_ACCEL_DMA_DEVICE","dma_device_type":2}],"name":"Malloc2","num_blocks":16384,"product_name":"Malloc disk","supported_io_types":{"abort":true,"compare":false,"compare_and_write":false,"copy":true,"flush":true,"get_zone_info":false,"nvme_admin":false,"nvme_io":false,"nvme_io_md":false,"nvme_iov_md":false,"read":true,"reset":true,"seek_data":false,"seek_hole":false,"unmap":true,"write":true,"write_zeroes":true,"zcopy":true,"zone_append":false,"zone_management":false},"uuid":"1b3a0f46-3463-4bad-888d-226323cb620a","zoned":false}]' 00:14:54.337 09:56:07 rpc.go_rpc -- rpc/rpc.sh@57 -- # jq length 00:14:54.337 09:56:07 rpc.go_rpc -- rpc/rpc.sh@57 -- # '[' 1 == 1 ']' 00:14:54.337 09:56:07 rpc.go_rpc -- rpc/rpc.sh@59 -- # rpc_cmd bdev_malloc_delete Malloc2 00:14:54.337 09:56:07 rpc.go_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:54.337 09:56:07 rpc.go_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:54.337 09:56:07 rpc.go_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:54.337 09:56:07 rpc.go_rpc -- rpc/rpc.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc 00:14:54.337 09:56:07 rpc.go_rpc -- rpc/rpc.sh@60 -- # bdevs='[]' 00:14:54.337 09:56:07 rpc.go_rpc -- rpc/rpc.sh@61 -- # jq length 00:14:54.337 09:56:07 rpc.go_rpc -- rpc/rpc.sh@61 -- # '[' 0 == 0 ']' 00:14:54.337 00:14:54.337 real 0m0.219s 00:14:54.337 user 0m0.142s 00:14:54.337 sys 0m0.043s 00:14:54.337 09:56:07 rpc.go_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:54.337 09:56:07 rpc.go_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:54.337 ************************************ 00:14:54.337 END TEST go_rpc 00:14:54.337 ************************************ 00:14:54.597 09:56:07 rpc -- common/autotest_common.sh@1142 -- # return 0 00:14:54.597 09:56:07 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:14:54.597 09:56:07 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:14:54.597 09:56:07 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:14:54.597 09:56:07 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:54.597 09:56:07 rpc -- common/autotest_common.sh@10 -- # set +x 00:14:54.597 ************************************ 00:14:54.597 START TEST rpc_daemon_integrity 00:14:54.597 ************************************ 00:14:54.597 09:56:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:14:54.597 09:56:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:14:54.597 09:56:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:54.597 09:56:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:14:54.597 09:56:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:54.597 09:56:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:14:54.597 09:56:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:14:54.597 09:56:08 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 
-- # '[' 0 == 0 ']' 00:14:54.597 09:56:08 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:14:54.597 09:56:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:54.597 09:56:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:14:54.597 09:56:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:54.597 09:56:08 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc3 00:14:54.597 09:56:08 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:14:54.597 09:56:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:54.597 09:56:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:14:54.597 09:56:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:54.597 09:56:08 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:14:54.597 { 00:14:54.597 "aliases": [ 00:14:54.597 "5beadbf9-523f-4a83-b3ce-e7b930d1e489" 00:14:54.597 ], 00:14:54.597 "assigned_rate_limits": { 00:14:54.597 "r_mbytes_per_sec": 0, 00:14:54.597 "rw_ios_per_sec": 0, 00:14:54.597 "rw_mbytes_per_sec": 0, 00:14:54.597 "w_mbytes_per_sec": 0 00:14:54.597 }, 00:14:54.597 "block_size": 512, 00:14:54.597 "claimed": false, 00:14:54.597 "driver_specific": {}, 00:14:54.597 "memory_domains": [ 00:14:54.597 { 00:14:54.597 "dma_device_id": "system", 00:14:54.597 "dma_device_type": 1 00:14:54.597 }, 00:14:54.597 { 00:14:54.597 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:54.597 "dma_device_type": 2 00:14:54.597 } 00:14:54.597 ], 00:14:54.597 "name": "Malloc3", 00:14:54.597 "num_blocks": 16384, 00:14:54.597 "product_name": "Malloc disk", 00:14:54.597 "supported_io_types": { 00:14:54.597 "abort": true, 00:14:54.597 "compare": false, 00:14:54.597 "compare_and_write": false, 00:14:54.597 "copy": true, 00:14:54.597 "flush": true, 00:14:54.597 "get_zone_info": false, 00:14:54.597 "nvme_admin": false, 00:14:54.597 "nvme_io": false, 00:14:54.597 "nvme_io_md": false, 00:14:54.597 "nvme_iov_md": false, 00:14:54.597 "read": true, 00:14:54.597 "reset": true, 00:14:54.597 "seek_data": false, 00:14:54.597 "seek_hole": false, 00:14:54.597 "unmap": true, 00:14:54.597 "write": true, 00:14:54.597 "write_zeroes": true, 00:14:54.597 "zcopy": true, 00:14:54.597 "zone_append": false, 00:14:54.597 "zone_management": false 00:14:54.597 }, 00:14:54.597 "uuid": "5beadbf9-523f-4a83-b3ce-e7b930d1e489", 00:14:54.597 "zoned": false 00:14:54.597 } 00:14:54.597 ]' 00:14:54.597 09:56:08 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:14:54.597 09:56:08 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:14:54.597 09:56:08 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc3 -p Passthru0 00:14:54.597 09:56:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:54.597 09:56:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:14:54.597 [2024-07-15 09:56:08.098744] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:14:54.597 [2024-07-15 09:56:08.098789] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:54.597 [2024-07-15 09:56:08.098803] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1d97d70 00:14:54.597 [2024-07-15 09:56:08.098810] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:54.597 [2024-07-15 09:56:08.100180] 
vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:54.597 [2024-07-15 09:56:08.100209] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:14:54.597 Passthru0 00:14:54.597 09:56:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:54.597 09:56:08 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:14:54.597 09:56:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:54.597 09:56:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:14:54.597 09:56:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:54.597 09:56:08 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:14:54.597 { 00:14:54.597 "aliases": [ 00:14:54.597 "5beadbf9-523f-4a83-b3ce-e7b930d1e489" 00:14:54.597 ], 00:14:54.597 "assigned_rate_limits": { 00:14:54.597 "r_mbytes_per_sec": 0, 00:14:54.597 "rw_ios_per_sec": 0, 00:14:54.597 "rw_mbytes_per_sec": 0, 00:14:54.597 "w_mbytes_per_sec": 0 00:14:54.597 }, 00:14:54.597 "block_size": 512, 00:14:54.597 "claim_type": "exclusive_write", 00:14:54.597 "claimed": true, 00:14:54.597 "driver_specific": {}, 00:14:54.597 "memory_domains": [ 00:14:54.597 { 00:14:54.597 "dma_device_id": "system", 00:14:54.597 "dma_device_type": 1 00:14:54.597 }, 00:14:54.598 { 00:14:54.598 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:54.598 "dma_device_type": 2 00:14:54.598 } 00:14:54.598 ], 00:14:54.598 "name": "Malloc3", 00:14:54.598 "num_blocks": 16384, 00:14:54.598 "product_name": "Malloc disk", 00:14:54.598 "supported_io_types": { 00:14:54.598 "abort": true, 00:14:54.598 "compare": false, 00:14:54.598 "compare_and_write": false, 00:14:54.598 "copy": true, 00:14:54.598 "flush": true, 00:14:54.598 "get_zone_info": false, 00:14:54.598 "nvme_admin": false, 00:14:54.598 "nvme_io": false, 00:14:54.598 "nvme_io_md": false, 00:14:54.598 "nvme_iov_md": false, 00:14:54.598 "read": true, 00:14:54.598 "reset": true, 00:14:54.598 "seek_data": false, 00:14:54.598 "seek_hole": false, 00:14:54.598 "unmap": true, 00:14:54.598 "write": true, 00:14:54.598 "write_zeroes": true, 00:14:54.598 "zcopy": true, 00:14:54.598 "zone_append": false, 00:14:54.598 "zone_management": false 00:14:54.598 }, 00:14:54.598 "uuid": "5beadbf9-523f-4a83-b3ce-e7b930d1e489", 00:14:54.598 "zoned": false 00:14:54.598 }, 00:14:54.598 { 00:14:54.598 "aliases": [ 00:14:54.598 "e5b2b9e0-8f06-5465-b842-f96021470158" 00:14:54.598 ], 00:14:54.598 "assigned_rate_limits": { 00:14:54.598 "r_mbytes_per_sec": 0, 00:14:54.598 "rw_ios_per_sec": 0, 00:14:54.598 "rw_mbytes_per_sec": 0, 00:14:54.598 "w_mbytes_per_sec": 0 00:14:54.598 }, 00:14:54.598 "block_size": 512, 00:14:54.598 "claimed": false, 00:14:54.598 "driver_specific": { 00:14:54.598 "passthru": { 00:14:54.598 "base_bdev_name": "Malloc3", 00:14:54.598 "name": "Passthru0" 00:14:54.598 } 00:14:54.598 }, 00:14:54.598 "memory_domains": [ 00:14:54.598 { 00:14:54.598 "dma_device_id": "system", 00:14:54.598 "dma_device_type": 1 00:14:54.598 }, 00:14:54.598 { 00:14:54.598 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:54.598 "dma_device_type": 2 00:14:54.598 } 00:14:54.598 ], 00:14:54.598 "name": "Passthru0", 00:14:54.598 "num_blocks": 16384, 00:14:54.598 "product_name": "passthru", 00:14:54.598 "supported_io_types": { 00:14:54.598 "abort": true, 00:14:54.598 "compare": false, 00:14:54.598 "compare_and_write": false, 00:14:54.598 "copy": true, 00:14:54.598 "flush": true, 00:14:54.598 
"get_zone_info": false, 00:14:54.598 "nvme_admin": false, 00:14:54.598 "nvme_io": false, 00:14:54.598 "nvme_io_md": false, 00:14:54.598 "nvme_iov_md": false, 00:14:54.598 "read": true, 00:14:54.598 "reset": true, 00:14:54.598 "seek_data": false, 00:14:54.598 "seek_hole": false, 00:14:54.598 "unmap": true, 00:14:54.598 "write": true, 00:14:54.598 "write_zeroes": true, 00:14:54.598 "zcopy": true, 00:14:54.598 "zone_append": false, 00:14:54.598 "zone_management": false 00:14:54.598 }, 00:14:54.598 "uuid": "e5b2b9e0-8f06-5465-b842-f96021470158", 00:14:54.598 "zoned": false 00:14:54.598 } 00:14:54.598 ]' 00:14:54.598 09:56:08 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:14:54.857 09:56:08 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:14:54.857 09:56:08 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:14:54.857 09:56:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:54.857 09:56:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:14:54.857 09:56:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:54.857 09:56:08 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc3 00:14:54.857 09:56:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:54.857 09:56:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:14:54.857 09:56:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:54.857 09:56:08 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:14:54.857 09:56:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:54.857 09:56:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:14:54.857 09:56:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:54.857 09:56:08 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:14:54.857 09:56:08 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:14:54.857 09:56:08 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:14:54.857 00:14:54.857 real 0m0.303s 00:14:54.857 user 0m0.181s 00:14:54.857 sys 0m0.051s 00:14:54.857 09:56:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:54.857 09:56:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:14:54.857 ************************************ 00:14:54.857 END TEST rpc_daemon_integrity 00:14:54.857 ************************************ 00:14:54.857 09:56:08 rpc -- common/autotest_common.sh@1142 -- # return 0 00:14:54.857 09:56:08 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:14:54.857 09:56:08 rpc -- rpc/rpc.sh@84 -- # killprocess 60694 00:14:54.857 09:56:08 rpc -- common/autotest_common.sh@948 -- # '[' -z 60694 ']' 00:14:54.857 09:56:08 rpc -- common/autotest_common.sh@952 -- # kill -0 60694 00:14:54.857 09:56:08 rpc -- common/autotest_common.sh@953 -- # uname 00:14:54.857 09:56:08 rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:54.857 09:56:08 rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 60694 00:14:54.857 09:56:08 rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:54.857 09:56:08 rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:54.857 killing process with pid 60694 00:14:54.858 09:56:08 rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 
60694' 00:14:54.858 09:56:08 rpc -- common/autotest_common.sh@967 -- # kill 60694 00:14:54.858 09:56:08 rpc -- common/autotest_common.sh@972 -- # wait 60694 00:14:55.117 00:14:55.117 real 0m2.885s 00:14:55.117 user 0m3.712s 00:14:55.117 sys 0m0.784s 00:14:55.117 09:56:08 rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:55.117 09:56:08 rpc -- common/autotest_common.sh@10 -- # set +x 00:14:55.117 ************************************ 00:14:55.117 END TEST rpc 00:14:55.117 ************************************ 00:14:55.377 09:56:08 -- common/autotest_common.sh@1142 -- # return 0 00:14:55.377 09:56:08 -- spdk/autotest.sh@170 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:14:55.377 09:56:08 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:14:55.377 09:56:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:55.377 09:56:08 -- common/autotest_common.sh@10 -- # set +x 00:14:55.377 ************************************ 00:14:55.377 START TEST skip_rpc 00:14:55.377 ************************************ 00:14:55.377 09:56:08 skip_rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:14:55.377 * Looking for test storage... 00:14:55.377 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:14:55.377 09:56:08 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:14:55.377 09:56:08 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:14:55.377 09:56:08 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:14:55.377 09:56:08 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:14:55.377 09:56:08 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:55.377 09:56:08 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:55.377 ************************************ 00:14:55.377 START TEST skip_rpc 00:14:55.377 ************************************ 00:14:55.377 09:56:08 skip_rpc.skip_rpc -- common/autotest_common.sh@1123 -- # test_skip_rpc 00:14:55.377 09:56:08 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=60956 00:14:55.377 09:56:08 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:14:55.377 09:56:08 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:14:55.377 09:56:08 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:14:55.377 [2024-07-15 09:56:08.904835] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:14:55.377 [2024-07-15 09:56:08.904904] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60956 ] 00:14:55.637 [2024-07-15 09:56:09.041992] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:55.637 [2024-07-15 09:56:09.146319] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:00.931 09:56:13 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:15:00.931 09:56:13 skip_rpc.skip_rpc -- common/autotest_common.sh@648 -- # local es=0 00:15:00.931 09:56:13 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd spdk_get_version 00:15:00.931 09:56:13 skip_rpc.skip_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:15:00.931 09:56:13 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:00.931 09:56:13 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:15:00.931 09:56:13 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:00.931 09:56:13 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # rpc_cmd spdk_get_version 00:15:00.931 09:56:13 skip_rpc.skip_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:00.931 09:56:13 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:00.931 2024/07/15 09:56:13 error on client creation, err: error during client creation for Unix socket, err: could not connect to a Unix socket on address /var/tmp/spdk.sock, err: dial unix /var/tmp/spdk.sock: connect: no such file or directory 00:15:00.931 09:56:13 skip_rpc.skip_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:15:00.931 09:56:13 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # es=1 00:15:00.931 09:56:13 skip_rpc.skip_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:00.931 09:56:13 skip_rpc.skip_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:00.931 09:56:13 skip_rpc.skip_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:00.931 09:56:13 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:15:00.931 09:56:13 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 60956 00:15:00.931 09:56:13 skip_rpc.skip_rpc -- common/autotest_common.sh@948 -- # '[' -z 60956 ']' 00:15:00.931 09:56:13 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # kill -0 60956 00:15:00.931 09:56:13 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # uname 00:15:00.931 09:56:13 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:00.931 09:56:13 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 60956 00:15:00.931 09:56:13 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:00.931 09:56:13 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:00.931 killing process with pid 60956 00:15:00.931 09:56:13 skip_rpc.skip_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 60956' 00:15:00.931 09:56:13 skip_rpc.skip_rpc -- common/autotest_common.sh@967 -- # kill 60956 00:15:00.931 09:56:13 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # wait 60956 00:15:00.931 00:15:00.931 real 0m5.375s 00:15:00.931 user 0m5.063s 00:15:00.931 sys 0m0.232s 00:15:00.931 09:56:14 skip_rpc.skip_rpc -- 
common/autotest_common.sh@1124 -- # xtrace_disable 00:15:00.931 09:56:14 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:00.931 ************************************ 00:15:00.931 END TEST skip_rpc 00:15:00.931 ************************************ 00:15:00.931 09:56:14 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:15:00.931 09:56:14 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:15:00.931 09:56:14 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:15:00.931 09:56:14 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:00.931 09:56:14 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:00.931 ************************************ 00:15:00.931 START TEST skip_rpc_with_json 00:15:00.931 ************************************ 00:15:00.931 09:56:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_json 00:15:00.931 09:56:14 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:15:00.931 09:56:14 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=61049 00:15:00.931 09:56:14 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:15:00.931 09:56:14 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:15:00.931 09:56:14 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 61049 00:15:00.931 09:56:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@829 -- # '[' -z 61049 ']' 00:15:00.931 09:56:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:00.931 09:56:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:00.931 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:00.931 09:56:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:00.931 09:56:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:00.931 09:56:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:15:00.931 [2024-07-15 09:56:14.339654] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:15:00.931 [2024-07-15 09:56:14.339756] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61049 ] 00:15:00.931 [2024-07-15 09:56:14.475370] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:01.227 [2024-07-15 09:56:14.582171] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:01.795 09:56:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:01.795 09:56:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # return 0 00:15:01.795 09:56:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:15:01.795 09:56:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:01.795 09:56:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:15:01.795 [2024-07-15 09:56:15.197532] nvmf_rpc.c:2562:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:15:01.795 2024/07/15 09:56:15 error on JSON-RPC call, method: nvmf_get_transports, params: map[trtype:tcp], err: error received for nvmf_get_transports method, err: Code=-19 Msg=No such device 00:15:01.795 request: 00:15:01.795 { 00:15:01.795 "method": "nvmf_get_transports", 00:15:01.795 "params": { 00:15:01.795 "trtype": "tcp" 00:15:01.795 } 00:15:01.795 } 00:15:01.795 Got JSON-RPC error response 00:15:01.795 GoRPCClient: error on JSON-RPC call 00:15:01.795 09:56:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:15:01.795 09:56:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:15:01.795 09:56:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:01.795 09:56:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:15:01.795 [2024-07-15 09:56:15.209645] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:01.795 09:56:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:01.795 09:56:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:15:01.795 09:56:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:01.795 09:56:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:15:01.795 09:56:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:01.795 09:56:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:15:01.795 { 00:15:01.795 "subsystems": [ 00:15:01.795 { 00:15:01.795 "subsystem": "keyring", 00:15:01.795 "config": [] 00:15:01.795 }, 00:15:01.795 { 00:15:01.795 "subsystem": "iobuf", 00:15:01.795 "config": [ 00:15:01.795 { 00:15:01.795 "method": "iobuf_set_options", 00:15:01.795 "params": { 00:15:01.795 "large_bufsize": 135168, 00:15:01.795 "large_pool_count": 1024, 00:15:01.795 "small_bufsize": 8192, 00:15:01.795 "small_pool_count": 8192 00:15:01.795 } 00:15:01.795 } 00:15:01.795 ] 00:15:01.795 }, 00:15:01.795 { 00:15:01.795 "subsystem": "sock", 00:15:01.795 "config": [ 00:15:01.795 { 00:15:01.795 "method": "sock_set_default_impl", 00:15:01.795 "params": { 00:15:01.795 "impl_name": "posix" 00:15:01.795 } 00:15:01.795 }, 00:15:01.795 { 00:15:01.795 "method": 
"sock_impl_set_options", 00:15:01.795 "params": { 00:15:01.795 "enable_ktls": false, 00:15:01.795 "enable_placement_id": 0, 00:15:01.795 "enable_quickack": false, 00:15:01.795 "enable_recv_pipe": true, 00:15:01.795 "enable_zerocopy_send_client": false, 00:15:01.795 "enable_zerocopy_send_server": true, 00:15:01.795 "impl_name": "ssl", 00:15:01.795 "recv_buf_size": 4096, 00:15:01.795 "send_buf_size": 4096, 00:15:01.795 "tls_version": 0, 00:15:01.795 "zerocopy_threshold": 0 00:15:01.795 } 00:15:01.795 }, 00:15:01.795 { 00:15:01.795 "method": "sock_impl_set_options", 00:15:01.795 "params": { 00:15:01.795 "enable_ktls": false, 00:15:01.795 "enable_placement_id": 0, 00:15:01.795 "enable_quickack": false, 00:15:01.795 "enable_recv_pipe": true, 00:15:01.795 "enable_zerocopy_send_client": false, 00:15:01.795 "enable_zerocopy_send_server": true, 00:15:01.795 "impl_name": "posix", 00:15:01.795 "recv_buf_size": 2097152, 00:15:01.795 "send_buf_size": 2097152, 00:15:01.795 "tls_version": 0, 00:15:01.795 "zerocopy_threshold": 0 00:15:01.795 } 00:15:01.795 } 00:15:01.795 ] 00:15:01.795 }, 00:15:01.795 { 00:15:01.795 "subsystem": "vmd", 00:15:01.795 "config": [] 00:15:01.795 }, 00:15:01.795 { 00:15:01.795 "subsystem": "accel", 00:15:01.795 "config": [ 00:15:01.795 { 00:15:01.795 "method": "accel_set_options", 00:15:01.795 "params": { 00:15:01.795 "buf_count": 2048, 00:15:01.795 "large_cache_size": 16, 00:15:01.795 "sequence_count": 2048, 00:15:01.795 "small_cache_size": 128, 00:15:01.795 "task_count": 2048 00:15:01.795 } 00:15:01.795 } 00:15:01.795 ] 00:15:01.795 }, 00:15:01.795 { 00:15:01.795 "subsystem": "bdev", 00:15:01.795 "config": [ 00:15:01.795 { 00:15:01.795 "method": "bdev_set_options", 00:15:01.795 "params": { 00:15:01.795 "bdev_auto_examine": true, 00:15:01.795 "bdev_io_cache_size": 256, 00:15:01.795 "bdev_io_pool_size": 65535, 00:15:01.795 "iobuf_large_cache_size": 16, 00:15:01.795 "iobuf_small_cache_size": 128 00:15:01.795 } 00:15:01.795 }, 00:15:01.795 { 00:15:01.795 "method": "bdev_raid_set_options", 00:15:01.795 "params": { 00:15:01.795 "process_window_size_kb": 1024 00:15:01.795 } 00:15:01.795 }, 00:15:01.795 { 00:15:01.796 "method": "bdev_iscsi_set_options", 00:15:01.796 "params": { 00:15:01.796 "timeout_sec": 30 00:15:01.796 } 00:15:01.796 }, 00:15:01.796 { 00:15:01.796 "method": "bdev_nvme_set_options", 00:15:01.796 "params": { 00:15:01.796 "action_on_timeout": "none", 00:15:01.796 "allow_accel_sequence": false, 00:15:01.796 "arbitration_burst": 0, 00:15:01.796 "bdev_retry_count": 3, 00:15:01.796 "ctrlr_loss_timeout_sec": 0, 00:15:01.796 "delay_cmd_submit": true, 00:15:01.796 "dhchap_dhgroups": [ 00:15:01.796 "null", 00:15:01.796 "ffdhe2048", 00:15:01.796 "ffdhe3072", 00:15:01.796 "ffdhe4096", 00:15:01.796 "ffdhe6144", 00:15:01.796 "ffdhe8192" 00:15:01.796 ], 00:15:01.796 "dhchap_digests": [ 00:15:01.796 "sha256", 00:15:01.796 "sha384", 00:15:01.796 "sha512" 00:15:01.796 ], 00:15:01.796 "disable_auto_failback": false, 00:15:01.796 "fast_io_fail_timeout_sec": 0, 00:15:01.796 "generate_uuids": false, 00:15:01.796 "high_priority_weight": 0, 00:15:01.796 "io_path_stat": false, 00:15:01.796 "io_queue_requests": 0, 00:15:01.796 "keep_alive_timeout_ms": 10000, 00:15:01.796 "low_priority_weight": 0, 00:15:01.796 "medium_priority_weight": 0, 00:15:01.796 "nvme_adminq_poll_period_us": 10000, 00:15:01.796 "nvme_error_stat": false, 00:15:01.796 "nvme_ioq_poll_period_us": 0, 00:15:01.796 "rdma_cm_event_timeout_ms": 0, 00:15:01.796 "rdma_max_cq_size": 0, 00:15:01.796 "rdma_srq_size": 0, 00:15:01.796 
"reconnect_delay_sec": 0, 00:15:01.796 "timeout_admin_us": 0, 00:15:01.796 "timeout_us": 0, 00:15:01.796 "transport_ack_timeout": 0, 00:15:01.796 "transport_retry_count": 4, 00:15:01.796 "transport_tos": 0 00:15:01.796 } 00:15:01.796 }, 00:15:01.796 { 00:15:01.796 "method": "bdev_nvme_set_hotplug", 00:15:01.796 "params": { 00:15:01.796 "enable": false, 00:15:01.796 "period_us": 100000 00:15:01.796 } 00:15:01.796 }, 00:15:01.796 { 00:15:01.796 "method": "bdev_wait_for_examine" 00:15:01.796 } 00:15:01.796 ] 00:15:01.796 }, 00:15:01.796 { 00:15:01.796 "subsystem": "scsi", 00:15:01.796 "config": null 00:15:01.796 }, 00:15:01.796 { 00:15:01.796 "subsystem": "scheduler", 00:15:01.796 "config": [ 00:15:01.796 { 00:15:01.796 "method": "framework_set_scheduler", 00:15:01.796 "params": { 00:15:01.796 "name": "static" 00:15:01.796 } 00:15:01.796 } 00:15:01.796 ] 00:15:01.796 }, 00:15:01.796 { 00:15:01.796 "subsystem": "vhost_scsi", 00:15:01.796 "config": [] 00:15:01.796 }, 00:15:01.796 { 00:15:01.796 "subsystem": "vhost_blk", 00:15:01.796 "config": [] 00:15:01.796 }, 00:15:01.796 { 00:15:01.796 "subsystem": "ublk", 00:15:01.796 "config": [] 00:15:01.796 }, 00:15:01.796 { 00:15:01.796 "subsystem": "nbd", 00:15:01.796 "config": [] 00:15:01.796 }, 00:15:01.796 { 00:15:01.796 "subsystem": "nvmf", 00:15:01.796 "config": [ 00:15:01.796 { 00:15:01.796 "method": "nvmf_set_config", 00:15:01.796 "params": { 00:15:01.796 "admin_cmd_passthru": { 00:15:01.796 "identify_ctrlr": false 00:15:01.796 }, 00:15:01.796 "discovery_filter": "match_any" 00:15:01.796 } 00:15:01.796 }, 00:15:01.796 { 00:15:01.796 "method": "nvmf_set_max_subsystems", 00:15:01.796 "params": { 00:15:01.796 "max_subsystems": 1024 00:15:01.796 } 00:15:01.796 }, 00:15:01.796 { 00:15:01.796 "method": "nvmf_set_crdt", 00:15:01.796 "params": { 00:15:01.796 "crdt1": 0, 00:15:01.796 "crdt2": 0, 00:15:01.796 "crdt3": 0 00:15:01.796 } 00:15:01.796 }, 00:15:01.796 { 00:15:01.796 "method": "nvmf_create_transport", 00:15:01.796 "params": { 00:15:01.796 "abort_timeout_sec": 1, 00:15:01.796 "ack_timeout": 0, 00:15:01.796 "buf_cache_size": 4294967295, 00:15:01.796 "c2h_success": true, 00:15:01.796 "data_wr_pool_size": 0, 00:15:01.796 "dif_insert_or_strip": false, 00:15:01.796 "in_capsule_data_size": 4096, 00:15:01.796 "io_unit_size": 131072, 00:15:01.796 "max_aq_depth": 128, 00:15:01.796 "max_io_qpairs_per_ctrlr": 127, 00:15:01.796 "max_io_size": 131072, 00:15:01.796 "max_queue_depth": 128, 00:15:01.796 "num_shared_buffers": 511, 00:15:01.796 "sock_priority": 0, 00:15:01.796 "trtype": "TCP", 00:15:01.796 "zcopy": false 00:15:01.796 } 00:15:01.796 } 00:15:01.796 ] 00:15:01.796 }, 00:15:01.796 { 00:15:01.796 "subsystem": "iscsi", 00:15:01.796 "config": [ 00:15:01.796 { 00:15:01.796 "method": "iscsi_set_options", 00:15:01.796 "params": { 00:15:01.796 "allow_duplicated_isid": false, 00:15:01.796 "chap_group": 0, 00:15:01.796 "data_out_pool_size": 2048, 00:15:01.796 "default_time2retain": 20, 00:15:01.796 "default_time2wait": 2, 00:15:01.796 "disable_chap": false, 00:15:01.796 "error_recovery_level": 0, 00:15:01.796 "first_burst_length": 8192, 00:15:01.796 "immediate_data": true, 00:15:01.796 "immediate_data_pool_size": 16384, 00:15:01.796 "max_connections_per_session": 2, 00:15:01.796 "max_large_datain_per_connection": 64, 00:15:01.796 "max_queue_depth": 64, 00:15:01.796 "max_r2t_per_connection": 4, 00:15:01.796 "max_sessions": 128, 00:15:01.796 "mutual_chap": false, 00:15:01.796 "node_base": "iqn.2016-06.io.spdk", 00:15:01.796 "nop_in_interval": 30, 00:15:01.796 
"nop_timeout": 60, 00:15:01.796 "pdu_pool_size": 36864, 00:15:01.796 "require_chap": false 00:15:01.796 } 00:15:01.796 } 00:15:01.796 ] 00:15:01.796 } 00:15:01.796 ] 00:15:01.796 } 00:15:01.796 09:56:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:15:01.796 09:56:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 61049 00:15:01.796 09:56:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 61049 ']' 00:15:01.796 09:56:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 61049 00:15:01.796 09:56:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:15:02.055 09:56:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:02.055 09:56:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 61049 00:15:02.055 09:56:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:02.055 killing process with pid 61049 00:15:02.055 09:56:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:02.055 09:56:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 61049' 00:15:02.055 09:56:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 61049 00:15:02.055 09:56:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 61049 00:15:02.313 09:56:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:15:02.313 09:56:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=61088 00:15:02.313 09:56:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:15:07.587 09:56:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 61088 00:15:07.587 09:56:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 61088 ']' 00:15:07.587 09:56:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 61088 00:15:07.587 09:56:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:15:07.587 09:56:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:07.587 09:56:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 61088 00:15:07.587 09:56:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:07.587 09:56:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:07.587 09:56:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 61088' 00:15:07.587 killing process with pid 61088 00:15:07.587 09:56:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 61088 00:15:07.587 09:56:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 61088 00:15:07.587 09:56:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:15:07.587 09:56:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:15:07.587 00:15:07.587 real 0m6.821s 00:15:07.587 user 0m6.543s 00:15:07.587 sys 0m0.551s 00:15:07.587 09:56:21 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@1124 -- # xtrace_disable 00:15:07.587 09:56:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:15:07.587 ************************************ 00:15:07.587 END TEST skip_rpc_with_json 00:15:07.587 ************************************ 00:15:07.587 09:56:21 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:15:07.587 09:56:21 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:15:07.587 09:56:21 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:15:07.587 09:56:21 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:07.587 09:56:21 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:07.587 ************************************ 00:15:07.587 START TEST skip_rpc_with_delay 00:15:07.587 ************************************ 00:15:07.587 09:56:21 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_delay 00:15:07.587 09:56:21 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:15:07.587 09:56:21 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@648 -- # local es=0 00:15:07.587 09:56:21 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:15:07.587 09:56:21 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:07.587 09:56:21 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:07.587 09:56:21 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:07.845 09:56:21 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:07.845 09:56:21 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:07.845 09:56:21 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:07.845 09:56:21 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:07.845 09:56:21 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:15:07.845 09:56:21 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:15:07.845 [2024-07-15 09:56:21.231978] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
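The skip_rpc_with_json sequence above amounts to: snapshot the running target's configuration as JSON (the sock/bdev/nvmf/iscsi dump printed before it), relaunch spdk_tgt from that file with the RPC server disabled, and confirm the NVMe-oF TCP transport is recreated purely from the JSON. A minimal sketch under that reading; paths are relative to the SPDK repo and the redirect of the target's console output to log.txt is an assumption, not copied from this run:

  ./scripts/rpc.py save_config > test/rpc/config.json          # dump the live JSON-RPC configuration
  ./build/bin/spdk_tgt --no-rpc-server -m 0x1 \
      --json test/rpc/config.json > test/rpc/log.txt 2>&1 &    # replay it with no RPC server
  sleep 5
  grep -q 'TCP Transport Init' test/rpc/log.txt                # transport must come back from the JSON alone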
00:15:07.845 [2024-07-15 09:56:21.232080] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:15:07.845 09:56:21 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # es=1 00:15:07.845 09:56:21 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:07.845 09:56:21 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:07.845 09:56:21 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:07.845 00:15:07.845 real 0m0.083s 00:15:07.845 user 0m0.053s 00:15:07.845 sys 0m0.029s 00:15:07.845 ************************************ 00:15:07.845 END TEST skip_rpc_with_delay 00:15:07.845 ************************************ 00:15:07.845 09:56:21 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:07.845 09:56:21 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:15:07.845 09:56:21 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:15:07.845 09:56:21 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:15:07.845 09:56:21 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:15:07.845 09:56:21 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:15:07.845 09:56:21 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:15:07.845 09:56:21 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:07.845 09:56:21 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:07.845 ************************************ 00:15:07.845 START TEST exit_on_failed_rpc_init 00:15:07.845 ************************************ 00:15:07.845 09:56:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1123 -- # test_exit_on_failed_rpc_init 00:15:07.845 09:56:21 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=61198 00:15:07.845 09:56:21 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:15:07.845 09:56:21 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 61198 00:15:07.845 09:56:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@829 -- # '[' -z 61198 ']' 00:15:07.845 09:56:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:07.845 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:07.845 09:56:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:07.845 09:56:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:07.845 09:56:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:07.845 09:56:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:15:07.845 [2024-07-15 09:56:21.390139] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
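The two *ERROR* lines just above are the expected outcome of skip_rpc_with_delay: spdk_tgt refuses --wait-for-rpc when --no-rpc-server is also given, and the NOT wrapper treats the non-zero exit as a pass. Reduced to its core, the check looks roughly like this sketch:

  if ! ./build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc; then
      echo "flag combination rejected, as the test expects"    # the non-zero exit is what becomes es=1 above
  fi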
00:15:07.845 [2024-07-15 09:56:21.390218] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61198 ] 00:15:08.103 [2024-07-15 09:56:21.529780] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:08.103 [2024-07-15 09:56:21.638861] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:09.052 09:56:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:09.052 09:56:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@862 -- # return 0 00:15:09.052 09:56:22 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:15:09.052 09:56:22 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:15:09.052 09:56:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@648 -- # local es=0 00:15:09.052 09:56:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:15:09.052 09:56:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:09.052 09:56:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:09.052 09:56:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:09.052 09:56:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:09.052 09:56:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:09.052 09:56:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:09.052 09:56:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:09.052 09:56:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:15:09.052 09:56:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:15:09.052 [2024-07-15 09:56:22.367314] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:15:09.052 [2024-07-15 09:56:22.367390] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61223 ] 00:15:09.052 [2024-07-15 09:56:22.505148] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:09.052 [2024-07-15 09:56:22.614284] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:09.052 [2024-07-15 09:56:22.614375] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
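The "RPC Unix domain socket path /var/tmp/spdk.sock in use" error above is the point of exit_on_failed_rpc_init: a second target is started against the default RPC socket already held by pid 61198 and must exit non-zero. A rough sketch of that collision; the wait comment stands in for the suite's waitforlisten helper:

  ./build/bin/spdk_tgt -m 0x1 &              # first target binds the default socket /var/tmp/spdk.sock
  first_pid=$!
  # ... wait until the first target answers on the socket ...
  if ! ./build/bin/spdk_tgt -m 0x2; then     # second target, same default socket, different core mask
      echo "second target could not claim the RPC socket, as expected"
  fi
  kill -SIGINT "$first_pid"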
00:15:09.052 [2024-07-15 09:56:22.614388] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:15:09.052 [2024-07-15 09:56:22.614396] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:15:09.312 09:56:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # es=234 00:15:09.312 09:56:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:09.312 09:56:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # es=106 00:15:09.312 09:56:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # case "$es" in 00:15:09.312 09:56:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@668 -- # es=1 00:15:09.312 09:56:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:09.312 09:56:22 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:15:09.312 09:56:22 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 61198 00:15:09.312 09:56:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@948 -- # '[' -z 61198 ']' 00:15:09.312 09:56:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # kill -0 61198 00:15:09.312 09:56:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # uname 00:15:09.312 09:56:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:09.312 09:56:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 61198 00:15:09.312 09:56:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:09.312 09:56:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:09.312 killing process with pid 61198 00:15:09.312 09:56:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 61198' 00:15:09.312 09:56:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@967 -- # kill 61198 00:15:09.312 09:56:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # wait 61198 00:15:09.570 00:15:09.570 real 0m1.760s 00:15:09.570 user 0m2.058s 00:15:09.570 sys 0m0.400s 00:15:09.570 09:56:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:09.570 ************************************ 00:15:09.570 END TEST exit_on_failed_rpc_init 00:15:09.570 09:56:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:15:09.570 ************************************ 00:15:09.570 09:56:23 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:15:09.570 09:56:23 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:15:09.570 00:15:09.570 real 0m14.415s 00:15:09.570 user 0m13.848s 00:15:09.570 sys 0m1.475s 00:15:09.570 09:56:23 skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:09.570 09:56:23 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:09.570 ************************************ 00:15:09.570 END TEST skip_rpc 00:15:09.570 ************************************ 00:15:09.827 09:56:23 -- common/autotest_common.sh@1142 -- # return 0 00:15:09.827 09:56:23 -- spdk/autotest.sh@171 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:15:09.827 09:56:23 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:15:09.827 
09:56:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:09.827 09:56:23 -- common/autotest_common.sh@10 -- # set +x 00:15:09.827 ************************************ 00:15:09.827 START TEST rpc_client 00:15:09.827 ************************************ 00:15:09.827 09:56:23 rpc_client -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:15:09.827 * Looking for test storage... 00:15:09.827 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:15:09.827 09:56:23 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:15:09.827 OK 00:15:09.827 09:56:23 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:15:09.827 00:15:09.827 real 0m0.150s 00:15:09.827 user 0m0.072s 00:15:09.827 sys 0m0.087s 00:15:09.827 ************************************ 00:15:09.827 END TEST rpc_client 00:15:09.827 ************************************ 00:15:09.827 09:56:23 rpc_client -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:09.827 09:56:23 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:15:09.827 09:56:23 -- common/autotest_common.sh@1142 -- # return 0 00:15:09.827 09:56:23 -- spdk/autotest.sh@172 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:15:09.827 09:56:23 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:15:09.827 09:56:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:09.827 09:56:23 -- common/autotest_common.sh@10 -- # set +x 00:15:10.085 ************************************ 00:15:10.085 START TEST json_config 00:15:10.085 ************************************ 00:15:10.085 09:56:23 json_config -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:15:10.085 09:56:23 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:10.085 09:56:23 json_config -- nvmf/common.sh@7 -- # uname -s 00:15:10.085 09:56:23 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:10.085 09:56:23 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:10.085 09:56:23 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:10.085 09:56:23 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:10.085 09:56:23 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:10.085 09:56:23 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:10.085 09:56:23 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:10.085 09:56:23 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:10.085 09:56:23 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:10.085 09:56:23 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:10.085 09:56:23 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec 00:15:10.085 09:56:23 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=a2b6b25a-cc90-4aea-9f09-c06f8a634aec 00:15:10.085 09:56:23 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:10.085 09:56:23 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:10.085 09:56:23 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:15:10.085 09:56:23 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:10.085 09:56:23 json_config -- 
nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:10.085 09:56:23 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:10.085 09:56:23 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:10.085 09:56:23 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:10.085 09:56:23 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:10.085 09:56:23 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:10.085 09:56:23 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:10.085 09:56:23 json_config -- paths/export.sh@5 -- # export PATH 00:15:10.085 09:56:23 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:10.085 09:56:23 json_config -- nvmf/common.sh@47 -- # : 0 00:15:10.085 09:56:23 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:10.085 09:56:23 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:10.085 09:56:23 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:10.085 09:56:23 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:10.085 09:56:23 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:10.085 09:56:23 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:10.085 09:56:23 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:10.085 09:56:23 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:10.085 09:56:23 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:15:10.085 09:56:23 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:15:10.085 09:56:23 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:15:10.085 09:56:23 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:15:10.085 09:56:23 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + 
SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:15:10.085 09:56:23 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:15:10.085 09:56:23 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:15:10.085 09:56:23 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:15:10.085 09:56:23 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:15:10.085 09:56:23 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:15:10.085 09:56:23 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:15:10.085 09:56:23 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 00:15:10.085 09:56:23 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:15:10.085 09:56:23 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:15:10.085 09:56:23 json_config -- json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:15:10.085 INFO: JSON configuration test init 00:15:10.085 09:56:23 json_config -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:15:10.085 09:56:23 json_config -- json_config/json_config.sh@357 -- # json_config_test_init 00:15:10.085 09:56:23 json_config -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init 00:15:10.085 09:56:23 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:10.085 09:56:23 json_config -- common/autotest_common.sh@10 -- # set +x 00:15:10.085 09:56:23 json_config -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:15:10.085 09:56:23 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:10.085 09:56:23 json_config -- common/autotest_common.sh@10 -- # set +x 00:15:10.085 09:56:23 json_config -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:15:10.085 09:56:23 json_config -- json_config/common.sh@9 -- # local app=target 00:15:10.085 09:56:23 json_config -- json_config/common.sh@10 -- # shift 00:15:10.085 09:56:23 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:15:10.085 09:56:23 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:15:10.085 09:56:23 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:15:10.085 09:56:23 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:15:10.085 09:56:23 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:15:10.085 09:56:23 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=61346 00:15:10.085 Waiting for target to run... 00:15:10.085 09:56:23 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 
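The waitforlisten call that follows simply polls the freshly started target's RPC socket until it answers; max_retries=100 comes from autotest_common.sh. A rough equivalent, with rpc_get_methods assumed as the probe and the 0.1 s pause as a placeholder:

  for _ in $(seq 1 100); do                                      # max_retries=100, as in the trace
      ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock rpc_get_methods >/dev/null 2>&1 && break
      sleep 0.1                                                  # placeholder pause between retries
  done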
00:15:10.085 09:56:23 json_config -- json_config/common.sh@25 -- # waitforlisten 61346 /var/tmp/spdk_tgt.sock 00:15:10.085 09:56:23 json_config -- common/autotest_common.sh@829 -- # '[' -z 61346 ']' 00:15:10.085 09:56:23 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:15:10.085 09:56:23 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:15:10.085 09:56:23 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:10.085 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:15:10.085 09:56:23 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:15:10.085 09:56:23 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:10.085 09:56:23 json_config -- common/autotest_common.sh@10 -- # set +x 00:15:10.085 [2024-07-15 09:56:23.611567] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:15:10.085 [2024-07-15 09:56:23.611651] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61346 ] 00:15:10.652 [2024-07-15 09:56:23.975205] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:10.652 [2024-07-15 09:56:24.063563] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:11.216 09:56:24 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:11.216 09:56:24 json_config -- common/autotest_common.sh@862 -- # return 0 00:15:11.216 00:15:11.216 09:56:24 json_config -- json_config/common.sh@26 -- # echo '' 00:15:11.216 09:56:24 json_config -- json_config/json_config.sh@269 -- # create_accel_config 00:15:11.216 09:56:24 json_config -- json_config/json_config.sh@93 -- # timing_enter create_accel_config 00:15:11.216 09:56:24 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:11.217 09:56:24 json_config -- common/autotest_common.sh@10 -- # set +x 00:15:11.217 09:56:24 json_config -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]] 00:15:11.217 09:56:24 json_config -- json_config/json_config.sh@101 -- # timing_exit create_accel_config 00:15:11.217 09:56:24 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:11.217 09:56:24 json_config -- common/autotest_common.sh@10 -- # set +x 00:15:11.217 09:56:24 json_config -- json_config/json_config.sh@273 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:15:11.217 09:56:24 json_config -- json_config/json_config.sh@274 -- # tgt_rpc load_config 00:15:11.217 09:56:24 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:15:11.475 09:56:25 json_config -- json_config/json_config.sh@276 -- # tgt_check_notification_types 00:15:11.475 09:56:25 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:15:11.475 09:56:25 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:11.475 09:56:25 json_config -- common/autotest_common.sh@10 -- # set +x 00:15:11.475 09:56:25 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:15:11.475 09:56:25 json_config -- json_config/json_config.sh@46 
-- # enabled_types=('bdev_register' 'bdev_unregister') 00:15:11.475 09:56:25 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:15:11.733 09:56:25 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:15:11.733 09:56:25 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:15:11.733 09:56:25 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:15:11.733 09:56:25 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:15:11.733 09:56:25 json_config -- json_config/json_config.sh@48 -- # local get_types 00:15:11.733 09:56:25 json_config -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:15:11.733 09:56:25 json_config -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types 00:15:11.733 09:56:25 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:11.733 09:56:25 json_config -- common/autotest_common.sh@10 -- # set +x 00:15:12.080 09:56:25 json_config -- json_config/json_config.sh@55 -- # return 0 00:15:12.080 09:56:25 json_config -- json_config/json_config.sh@278 -- # [[ 0 -eq 1 ]] 00:15:12.080 09:56:25 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:15:12.080 09:56:25 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:15:12.080 09:56:25 json_config -- json_config/json_config.sh@290 -- # [[ 1 -eq 1 ]] 00:15:12.080 09:56:25 json_config -- json_config/json_config.sh@291 -- # create_nvmf_subsystem_config 00:15:12.080 09:56:25 json_config -- json_config/json_config.sh@230 -- # timing_enter create_nvmf_subsystem_config 00:15:12.080 09:56:25 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:12.080 09:56:25 json_config -- common/autotest_common.sh@10 -- # set +x 00:15:12.080 09:56:25 json_config -- json_config/json_config.sh@232 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:15:12.080 09:56:25 json_config -- json_config/json_config.sh@233 -- # [[ tcp == \r\d\m\a ]] 00:15:12.080 09:56:25 json_config -- json_config/json_config.sh@237 -- # [[ -z 127.0.0.1 ]] 00:15:12.080 09:56:25 json_config -- json_config/json_config.sh@242 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:15:12.080 09:56:25 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:15:12.080 MallocForNvmf0 00:15:12.080 09:56:25 json_config -- json_config/json_config.sh@243 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:15:12.080 09:56:25 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:15:12.339 MallocForNvmf1 00:15:12.339 09:56:25 json_config -- json_config/json_config.sh@245 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:15:12.339 09:56:25 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:15:12.598 [2024-07-15 09:56:26.024703] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:12.598 09:56:26 json_config -- json_config/json_config.sh@246 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:12.598 09:56:26 json_config -- json_config/common.sh@57 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:12.857 09:56:26 json_config -- json_config/json_config.sh@247 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:15:12.857 09:56:26 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:15:12.857 09:56:26 json_config -- json_config/json_config.sh@248 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:15:12.857 09:56:26 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:15:13.116 09:56:26 json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:15:13.116 09:56:26 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:15:13.374 [2024-07-15 09:56:26.823514] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:15:13.374 09:56:26 json_config -- json_config/json_config.sh@251 -- # timing_exit create_nvmf_subsystem_config 00:15:13.374 09:56:26 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:13.374 09:56:26 json_config -- common/autotest_common.sh@10 -- # set +x 00:15:13.374 09:56:26 json_config -- json_config/json_config.sh@293 -- # timing_exit json_config_setup_target 00:15:13.374 09:56:26 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:13.374 09:56:26 json_config -- common/autotest_common.sh@10 -- # set +x 00:15:13.374 09:56:26 json_config -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]] 00:15:13.374 09:56:26 json_config -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:15:13.374 09:56:26 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:15:13.632 MallocBdevForConfigChangeCheck 00:15:13.632 09:56:27 json_config -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init 00:15:13.632 09:56:27 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:13.632 09:56:27 json_config -- common/autotest_common.sh@10 -- # set +x 00:15:13.632 09:56:27 json_config -- json_config/json_config.sh@359 -- # tgt_rpc save_config 00:15:13.632 09:56:27 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:15:14.200 INFO: shutting down applications... 00:15:14.200 09:56:27 json_config -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down applications...' 
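Unrolled from the tgt_rpc traces above, the NVMe-oF target is assembled over JSON-RPC in a handful of calls, plus the marker bdev used later for change detection. Every command and argument below appears in the trace; only the rpc shell function and the redirect of save_config to spdk_tgt_config.json are added here as assumptions:

  rpc() { ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock "$@"; }           # address the target's RPC socket
  rpc bdev_malloc_create 8 512 --name MallocForNvmf0
  rpc bdev_malloc_create 4 1024 --name MallocForNvmf1
  rpc nvmf_create_transport -t tcp -u 8192 -c 0
  rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
  rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
  rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420
  rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck   # marker bdev for the change check
  rpc save_config > spdk_tgt_config.json                               # assumed destination; the trace only shows save_config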
00:15:14.200 09:56:27 json_config -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]] 00:15:14.200 09:56:27 json_config -- json_config/json_config.sh@368 -- # json_config_clear target 00:15:14.200 09:56:27 json_config -- json_config/json_config.sh@332 -- # [[ -n 22 ]] 00:15:14.200 09:56:27 json_config -- json_config/json_config.sh@333 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:15:14.460 Calling clear_iscsi_subsystem 00:15:14.460 Calling clear_nvmf_subsystem 00:15:14.460 Calling clear_nbd_subsystem 00:15:14.460 Calling clear_ublk_subsystem 00:15:14.460 Calling clear_vhost_blk_subsystem 00:15:14.460 Calling clear_vhost_scsi_subsystem 00:15:14.460 Calling clear_bdev_subsystem 00:15:14.460 09:56:27 json_config -- json_config/json_config.sh@337 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:15:14.460 09:56:27 json_config -- json_config/json_config.sh@343 -- # count=100 00:15:14.460 09:56:27 json_config -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']' 00:15:14.460 09:56:27 json_config -- json_config/json_config.sh@345 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:15:14.460 09:56:27 json_config -- json_config/json_config.sh@345 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:15:14.460 09:56:27 json_config -- json_config/json_config.sh@345 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:15:14.719 09:56:28 json_config -- json_config/json_config.sh@345 -- # break 00:15:14.719 09:56:28 json_config -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']' 00:15:14.719 09:56:28 json_config -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target 00:15:14.719 09:56:28 json_config -- json_config/common.sh@31 -- # local app=target 00:15:14.719 09:56:28 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:15:14.719 09:56:28 json_config -- json_config/common.sh@35 -- # [[ -n 61346 ]] 00:15:14.719 09:56:28 json_config -- json_config/common.sh@38 -- # kill -SIGINT 61346 00:15:14.719 09:56:28 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:15:14.719 09:56:28 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:15:14.719 09:56:28 json_config -- json_config/common.sh@41 -- # kill -0 61346 00:15:14.719 09:56:28 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:15:15.288 09:56:28 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:15:15.288 09:56:28 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:15:15.288 09:56:28 json_config -- json_config/common.sh@41 -- # kill -0 61346 00:15:15.288 09:56:28 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:15:15.288 09:56:28 json_config -- json_config/common.sh@43 -- # break 00:15:15.288 09:56:28 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:15:15.288 SPDK target shutdown done 00:15:15.288 09:56:28 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:15:15.288 INFO: relaunching applications... 00:15:15.288 09:56:28 json_config -- json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...' 
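The shutdown path above is: wipe the live configuration subsystem by subsystem via clear_config.py, send SIGINT to the target, then poll with kill -0 for up to 30 half-second intervals until the process is gone. In outline, with pid standing in for app_pid[target] (61346 in this run):

  ./test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config
  kill -SIGINT "$pid"
  for _ in $(seq 1 30); do                     # up to 30 checks, 0.5 s apart, as in json_config/common.sh
      kill -0 "$pid" 2>/dev/null || break      # process gone: shutdown done
      sleep 0.5
  done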
00:15:15.288 09:56:28 json_config -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:15:15.288 09:56:28 json_config -- json_config/common.sh@9 -- # local app=target 00:15:15.288 09:56:28 json_config -- json_config/common.sh@10 -- # shift 00:15:15.288 09:56:28 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:15:15.288 09:56:28 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:15:15.288 09:56:28 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:15:15.288 09:56:28 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:15:15.288 09:56:28 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:15:15.288 09:56:28 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=61616 00:15:15.288 09:56:28 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:15:15.288 Waiting for target to run... 00:15:15.288 09:56:28 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:15:15.288 09:56:28 json_config -- json_config/common.sh@25 -- # waitforlisten 61616 /var/tmp/spdk_tgt.sock 00:15:15.288 09:56:28 json_config -- common/autotest_common.sh@829 -- # '[' -z 61616 ']' 00:15:15.288 09:56:28 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:15:15.288 09:56:28 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:15.288 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:15:15.288 09:56:28 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:15:15.288 09:56:28 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:15.288 09:56:28 json_config -- common/autotest_common.sh@10 -- # set +x 00:15:15.288 [2024-07-15 09:56:28.790409] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:15:15.288 [2024-07-15 09:56:28.790480] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61616 ] 00:15:15.903 [2024-07-15 09:56:29.142902] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:15.903 [2024-07-15 09:56:29.229619] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:16.161 [2024-07-15 09:56:29.553096] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:16.161 [2024-07-15 09:56:29.585048] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:15:16.161 09:56:29 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:16.161 09:56:29 json_config -- common/autotest_common.sh@862 -- # return 0 00:15:16.161 00:15:16.161 09:56:29 json_config -- json_config/common.sh@26 -- # echo '' 00:15:16.161 09:56:29 json_config -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:15:16.161 INFO: Checking if target configuration is the same... 00:15:16.161 09:56:29 json_config -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' 
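The "Checking if target configuration is the same" step that follows boils down to: dump the live configuration again, canonicalize both JSON documents with config_filter.py -method sort, and diff them; after MallocBdevForConfigChangeCheck is deleted, the same diff is expected to fail. A sketch with placeholder temp-file names, assuming config_filter.py reads the document on stdin as the bare invocations in the trace suggest:

  rpc() { ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock "$@"; }
  sort_cfg() { ./test/json_config/config_filter.py -method sort; }
  rpc save_config | sort_cfg > /tmp/live.sorted.json
  sort_cfg < spdk_tgt_config.json > /tmp/saved.sorted.json
  diff -u /tmp/saved.sorted.json /tmp/live.sorted.json && echo 'configs match'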
00:15:16.161 09:56:29 json_config -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:15:16.161 09:56:29 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:15:16.161 09:56:29 json_config -- json_config/json_config.sh@378 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:15:16.161 + '[' 2 -ne 2 ']' 00:15:16.161 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:15:16.161 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:15:16.161 + rootdir=/home/vagrant/spdk_repo/spdk 00:15:16.161 +++ basename /dev/fd/62 00:15:16.161 ++ mktemp /tmp/62.XXX 00:15:16.161 + tmp_file_1=/tmp/62.3PY 00:15:16.161 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:15:16.161 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:15:16.161 + tmp_file_2=/tmp/spdk_tgt_config.json.loh 00:15:16.161 + ret=0 00:15:16.161 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:15:16.731 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:15:16.731 + diff -u /tmp/62.3PY /tmp/spdk_tgt_config.json.loh 00:15:16.731 INFO: JSON config files are the same 00:15:16.731 + echo 'INFO: JSON config files are the same' 00:15:16.731 + rm /tmp/62.3PY /tmp/spdk_tgt_config.json.loh 00:15:16.731 + exit 0 00:15:16.731 09:56:30 json_config -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:15:16.731 INFO: changing configuration and checking if this can be detected... 00:15:16.731 09:56:30 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:15:16.731 09:56:30 json_config -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:15:16.731 09:56:30 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:15:16.993 09:56:30 json_config -- json_config/json_config.sh@387 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:15:16.993 09:56:30 json_config -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:15:16.993 09:56:30 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:15:16.993 + '[' 2 -ne 2 ']' 00:15:16.993 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:15:16.993 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 
00:15:16.993 + rootdir=/home/vagrant/spdk_repo/spdk 00:15:16.993 +++ basename /dev/fd/62 00:15:16.993 ++ mktemp /tmp/62.XXX 00:15:16.993 + tmp_file_1=/tmp/62.iw7 00:15:16.993 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:15:16.993 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:15:16.993 + tmp_file_2=/tmp/spdk_tgt_config.json.HdO 00:15:16.993 + ret=0 00:15:16.993 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:15:17.251 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:15:17.251 + diff -u /tmp/62.iw7 /tmp/spdk_tgt_config.json.HdO 00:15:17.251 + ret=1 00:15:17.251 + echo '=== Start of file: /tmp/62.iw7 ===' 00:15:17.251 + cat /tmp/62.iw7 00:15:17.251 + echo '=== End of file: /tmp/62.iw7 ===' 00:15:17.251 + echo '' 00:15:17.251 + echo '=== Start of file: /tmp/spdk_tgt_config.json.HdO ===' 00:15:17.251 + cat /tmp/spdk_tgt_config.json.HdO 00:15:17.251 + echo '=== End of file: /tmp/spdk_tgt_config.json.HdO ===' 00:15:17.251 + echo '' 00:15:17.251 + rm /tmp/62.iw7 /tmp/spdk_tgt_config.json.HdO 00:15:17.251 + exit 1 00:15:17.251 INFO: configuration change detected. 00:15:17.251 09:56:30 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 00:15:17.251 09:56:30 json_config -- json_config/json_config.sh@394 -- # json_config_test_fini 00:15:17.251 09:56:30 json_config -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini 00:15:17.251 09:56:30 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:17.251 09:56:30 json_config -- common/autotest_common.sh@10 -- # set +x 00:15:17.251 09:56:30 json_config -- json_config/json_config.sh@307 -- # local ret=0 00:15:17.251 09:56:30 json_config -- json_config/json_config.sh@309 -- # [[ -n '' ]] 00:15:17.251 09:56:30 json_config -- json_config/json_config.sh@317 -- # [[ -n 61616 ]] 00:15:17.251 09:56:30 json_config -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config 00:15:17.251 09:56:30 json_config -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config 00:15:17.251 09:56:30 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:17.251 09:56:30 json_config -- common/autotest_common.sh@10 -- # set +x 00:15:17.251 09:56:30 json_config -- json_config/json_config.sh@186 -- # [[ 0 -eq 1 ]] 00:15:17.251 09:56:30 json_config -- json_config/json_config.sh@193 -- # uname -s 00:15:17.251 09:56:30 json_config -- json_config/json_config.sh@193 -- # [[ Linux = Linux ]] 00:15:17.251 09:56:30 json_config -- json_config/json_config.sh@194 -- # rm -f /sample_aio 00:15:17.251 09:56:30 json_config -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]] 00:15:17.251 09:56:30 json_config -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config 00:15:17.251 09:56:30 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:17.251 09:56:30 json_config -- common/autotest_common.sh@10 -- # set +x 00:15:17.251 09:56:30 json_config -- json_config/json_config.sh@323 -- # killprocess 61616 00:15:17.251 09:56:30 json_config -- common/autotest_common.sh@948 -- # '[' -z 61616 ']' 00:15:17.251 09:56:30 json_config -- common/autotest_common.sh@952 -- # kill -0 61616 00:15:17.251 09:56:30 json_config -- common/autotest_common.sh@953 -- # uname 00:15:17.251 09:56:30 json_config -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:17.251 09:56:30 json_config -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 61616 00:15:17.508 
09:56:30 json_config -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:17.508 09:56:30 json_config -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:17.508 killing process with pid 61616 00:15:17.508 09:56:30 json_config -- common/autotest_common.sh@966 -- # echo 'killing process with pid 61616' 00:15:17.508 09:56:30 json_config -- common/autotest_common.sh@967 -- # kill 61616 00:15:17.508 09:56:30 json_config -- common/autotest_common.sh@972 -- # wait 61616 00:15:17.508 09:56:31 json_config -- json_config/json_config.sh@326 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:15:17.508 09:56:31 json_config -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini 00:15:17.508 09:56:31 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:17.508 09:56:31 json_config -- common/autotest_common.sh@10 -- # set +x 00:15:17.766 09:56:31 json_config -- json_config/json_config.sh@328 -- # return 0 00:15:17.766 INFO: Success 00:15:17.766 09:56:31 json_config -- json_config/json_config.sh@396 -- # echo 'INFO: Success' 00:15:17.766 00:15:17.766 real 0m7.703s 00:15:17.766 user 0m10.668s 00:15:17.766 sys 0m1.878s 00:15:17.766 09:56:31 json_config -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:17.766 09:56:31 json_config -- common/autotest_common.sh@10 -- # set +x 00:15:17.766 ************************************ 00:15:17.766 END TEST json_config 00:15:17.766 ************************************ 00:15:17.766 09:56:31 -- common/autotest_common.sh@1142 -- # return 0 00:15:17.766 09:56:31 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:15:17.766 09:56:31 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:15:17.766 09:56:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:17.766 09:56:31 -- common/autotest_common.sh@10 -- # set +x 00:15:17.766 ************************************ 00:15:17.766 START TEST json_config_extra_key 00:15:17.766 ************************************ 00:15:17.766 09:56:31 json_config_extra_key -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:15:17.766 09:56:31 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:17.766 09:56:31 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:15:17.766 09:56:31 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:17.766 09:56:31 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:17.766 09:56:31 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:17.766 09:56:31 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:17.766 09:56:31 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:17.766 09:56:31 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:17.766 09:56:31 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:17.766 09:56:31 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:17.766 09:56:31 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:17.766 09:56:31 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:17.766 09:56:31 json_config_extra_key -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec 00:15:17.766 09:56:31 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=a2b6b25a-cc90-4aea-9f09-c06f8a634aec 00:15:17.766 09:56:31 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:17.766 09:56:31 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:17.766 09:56:31 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:15:17.766 09:56:31 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:17.766 09:56:31 json_config_extra_key -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:17.766 09:56:31 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:17.766 09:56:31 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:17.766 09:56:31 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:17.766 09:56:31 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:17.766 09:56:31 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:17.766 09:56:31 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:17.766 09:56:31 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:15:17.766 09:56:31 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:17.766 09:56:31 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:15:17.766 09:56:31 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:17.766 09:56:31 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:17.766 09:56:31 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:17.766 09:56:31 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:17.766 09:56:31 json_config_extra_key -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:17.766 09:56:31 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:17.766 09:56:31 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:17.766 09:56:31 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:17.766 09:56:31 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:15:17.766 09:56:31 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:15:17.766 09:56:31 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:15:17.766 09:56:31 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:15:17.766 09:56:31 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:15:17.766 09:56:31 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:15:17.766 09:56:31 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:15:17.766 09:56:31 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:15:17.766 09:56:31 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:15:17.766 09:56:31 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:15:17.766 INFO: launching applications... 00:15:17.766 09:56:31 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:15:17.766 09:56:31 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:15:17.766 09:56:31 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:15:17.766 09:56:31 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:15:17.766 09:56:31 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:15:17.766 09:56:31 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:15:17.766 09:56:31 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:15:17.766 09:56:31 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:15:17.766 09:56:31 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:15:17.766 09:56:31 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=61781 00:15:17.766 09:56:31 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:15:17.766 Waiting for target to run... 00:15:17.766 09:56:31 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 
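As in the json_config run earlier, sourcing nvmf/common.sh derives the host identity from nvme-cli: gen-hostnqn produces the uuid-based NQN seen above and the host ID reuses its uuid portion. The NVME_HOST array is verbatim from the trace; the exact parameter expansion for NVME_HOSTID is an assumption:

  NVME_HOSTNQN=$(nvme gen-hostnqn)        # nvme-cli emits a uuid-based NQN like the one above
  NVME_HOSTID=${NVME_HOSTNQN##*:}         # assumption: the host ID is the uuid portion of that NQN
  NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")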
00:15:17.766 09:56:31 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 61781 /var/tmp/spdk_tgt.sock 00:15:17.766 09:56:31 json_config_extra_key -- common/autotest_common.sh@829 -- # '[' -z 61781 ']' 00:15:17.766 09:56:31 json_config_extra_key -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:15:17.766 09:56:31 json_config_extra_key -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:17.766 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:15:17.766 09:56:31 json_config_extra_key -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:15:17.766 09:56:31 json_config_extra_key -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:17.766 09:56:31 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:15:17.766 [2024-07-15 09:56:31.327184] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:15:17.767 [2024-07-15 09:56:31.327258] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61781 ] 00:15:18.332 [2024-07-15 09:56:31.699115] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:18.332 [2024-07-15 09:56:31.785508] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:18.900 09:56:32 json_config_extra_key -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:18.900 09:56:32 json_config_extra_key -- common/autotest_common.sh@862 -- # return 0 00:15:18.900 00:15:18.900 09:56:32 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:15:18.900 INFO: shutting down applications... 00:15:18.900 09:56:32 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
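Note: the waitforlisten call above essentially polls the RPC socket until the freshly started target answers. A simplified stand-in, assuming the rpc.py script and socket path from this run (the real helper in autotest_common.sh adds retry limits and error reporting):

  # wait up to ~50 s for /var/tmp/spdk_tgt.sock to accept RPCs
  for _ in $(seq 1 100); do
      /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock rpc_get_methods \
          >/dev/null 2>&1 && break
      sleep 0.5
  done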
00:15:18.900 09:56:32 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:15:18.900 09:56:32 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:15:18.900 09:56:32 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:15:18.900 09:56:32 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 61781 ]] 00:15:18.900 09:56:32 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 61781 00:15:18.900 09:56:32 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:15:18.900 09:56:32 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:15:18.900 09:56:32 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 61781 00:15:18.900 09:56:32 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:15:19.159 09:56:32 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:15:19.159 09:56:32 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:15:19.159 09:56:32 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 61781 00:15:19.159 09:56:32 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:15:19.159 09:56:32 json_config_extra_key -- json_config/common.sh@43 -- # break 00:15:19.159 09:56:32 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:15:19.159 SPDK target shutdown done 00:15:19.159 09:56:32 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:15:19.159 Success 00:15:19.159 09:56:32 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:15:19.159 00:15:19.159 real 0m1.572s 00:15:19.159 user 0m1.337s 00:15:19.159 sys 0m0.411s 00:15:19.159 09:56:32 json_config_extra_key -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:19.159 09:56:32 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:15:19.159 ************************************ 00:15:19.159 END TEST json_config_extra_key 00:15:19.159 ************************************ 00:15:19.418 09:56:32 -- common/autotest_common.sh@1142 -- # return 0 00:15:19.418 09:56:32 -- spdk/autotest.sh@174 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:15:19.418 09:56:32 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:15:19.418 09:56:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:19.418 09:56:32 -- common/autotest_common.sh@10 -- # set +x 00:15:19.418 ************************************ 00:15:19.418 START TEST alias_rpc 00:15:19.418 ************************************ 00:15:19.418 09:56:32 alias_rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:15:19.418 * Looking for test storage... 
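Note: the alias_rpc pass that follows boots a plain spdk_tgt and then replays a JSON configuration through rpc.py load_config, with -i asking it to also accept the deprecated RPC aliases under test. A hypothetical invocation against the default socket (the test supplies its own JSON on stdin; config.json below is only a placeholder):

  # replay a saved configuration, accepting deprecated RPC method aliases (-i)
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i < config.json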
00:15:19.418 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:15:19.418 09:56:32 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:15:19.418 09:56:32 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=61863 00:15:19.418 09:56:32 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:19.418 09:56:32 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 61863 00:15:19.418 09:56:32 alias_rpc -- common/autotest_common.sh@829 -- # '[' -z 61863 ']' 00:15:19.418 09:56:32 alias_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:19.418 09:56:32 alias_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:19.418 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:19.418 09:56:32 alias_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:19.418 09:56:32 alias_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:19.418 09:56:32 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:19.418 [2024-07-15 09:56:32.978395] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:15:19.418 [2024-07-15 09:56:32.978466] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61863 ] 00:15:19.678 [2024-07-15 09:56:33.117492] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:19.678 [2024-07-15 09:56:33.223590] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:20.615 09:56:33 alias_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:20.615 09:56:33 alias_rpc -- common/autotest_common.sh@862 -- # return 0 00:15:20.615 09:56:33 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:15:20.615 09:56:34 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 61863 00:15:20.615 09:56:34 alias_rpc -- common/autotest_common.sh@948 -- # '[' -z 61863 ']' 00:15:20.615 09:56:34 alias_rpc -- common/autotest_common.sh@952 -- # kill -0 61863 00:15:20.615 09:56:34 alias_rpc -- common/autotest_common.sh@953 -- # uname 00:15:20.615 09:56:34 alias_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:20.615 09:56:34 alias_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 61863 00:15:20.615 09:56:34 alias_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:20.615 09:56:34 alias_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:20.615 killing process with pid 61863 00:15:20.615 09:56:34 alias_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 61863' 00:15:20.615 09:56:34 alias_rpc -- common/autotest_common.sh@967 -- # kill 61863 00:15:20.615 09:56:34 alias_rpc -- common/autotest_common.sh@972 -- # wait 61863 00:15:20.874 00:15:20.874 real 0m1.660s 00:15:20.874 user 0m1.801s 00:15:20.874 sys 0m0.439s 00:15:20.874 09:56:34 alias_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:20.874 09:56:34 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:20.874 ************************************ 00:15:20.874 END TEST alias_rpc 00:15:20.874 ************************************ 00:15:21.133 
09:56:34 -- common/autotest_common.sh@1142 -- # return 0 00:15:21.133 09:56:34 -- spdk/autotest.sh@176 -- # [[ 1 -eq 0 ]] 00:15:21.133 09:56:34 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:15:21.133 09:56:34 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:15:21.133 09:56:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:21.133 09:56:34 -- common/autotest_common.sh@10 -- # set +x 00:15:21.133 ************************************ 00:15:21.133 START TEST dpdk_mem_utility 00:15:21.133 ************************************ 00:15:21.133 09:56:34 dpdk_mem_utility -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:15:21.133 * Looking for test storage... 00:15:21.133 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:15:21.133 09:56:34 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:15:21.133 09:56:34 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=61950 00:15:21.133 09:56:34 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:21.133 09:56:34 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 61950 00:15:21.133 09:56:34 dpdk_mem_utility -- common/autotest_common.sh@829 -- # '[' -z 61950 ']' 00:15:21.133 09:56:34 dpdk_mem_utility -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:21.133 09:56:34 dpdk_mem_utility -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:21.133 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:21.133 09:56:34 dpdk_mem_utility -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:21.133 09:56:34 dpdk_mem_utility -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:21.133 09:56:34 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:15:21.133 [2024-07-15 09:56:34.681620] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
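Note: the dpdk_mem_utility pass starting here first asks the target to write a memory dump via the env_dpdk_get_mem_stats RPC (to /tmp/spdk_mem_dump.txt in this run) and then post-processes it with the MEM_SCRIPT assigned above; the detailed element/memzone listing reproduced below comes from the second, -m 0, invocation. Roughly, outside the harness:

  # request the dump file, then summarize it two ways with scripts/dpdk_mem_info.py
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats
  /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py        # heap/mempool/memzone totals
  /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0   # per-element detail, as dumped below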
00:15:21.133 [2024-07-15 09:56:34.681718] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61950 ] 00:15:21.392 [2024-07-15 09:56:34.819579] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:21.392 [2024-07-15 09:56:34.925547] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:22.331 09:56:35 dpdk_mem_utility -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:22.331 09:56:35 dpdk_mem_utility -- common/autotest_common.sh@862 -- # return 0 00:15:22.331 09:56:35 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:15:22.331 09:56:35 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:15:22.331 09:56:35 dpdk_mem_utility -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:22.331 09:56:35 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:15:22.331 { 00:15:22.331 "filename": "/tmp/spdk_mem_dump.txt" 00:15:22.331 } 00:15:22.331 09:56:35 dpdk_mem_utility -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:22.331 09:56:35 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:15:22.331 DPDK memory size 814.000000 MiB in 1 heap(s) 00:15:22.331 1 heaps totaling size 814.000000 MiB 00:15:22.331 size: 814.000000 MiB heap id: 0 00:15:22.331 end heaps---------- 00:15:22.331 8 mempools totaling size 598.116089 MiB 00:15:22.331 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:15:22.331 size: 158.602051 MiB name: PDU_data_out_Pool 00:15:22.331 size: 84.521057 MiB name: bdev_io_61950 00:15:22.331 size: 51.011292 MiB name: evtpool_61950 00:15:22.331 size: 50.003479 MiB name: msgpool_61950 00:15:22.331 size: 21.763794 MiB name: PDU_Pool 00:15:22.331 size: 19.513306 MiB name: SCSI_TASK_Pool 00:15:22.331 size: 0.026123 MiB name: Session_Pool 00:15:22.331 end mempools------- 00:15:22.331 6 memzones totaling size 4.142822 MiB 00:15:22.331 size: 1.000366 MiB name: RG_ring_0_61950 00:15:22.331 size: 1.000366 MiB name: RG_ring_1_61950 00:15:22.331 size: 1.000366 MiB name: RG_ring_4_61950 00:15:22.331 size: 1.000366 MiB name: RG_ring_5_61950 00:15:22.331 size: 0.125366 MiB name: RG_ring_2_61950 00:15:22.331 size: 0.015991 MiB name: RG_ring_3_61950 00:15:22.331 end memzones------- 00:15:22.331 09:56:35 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:15:22.331 heap id: 0 total size: 814.000000 MiB number of busy elements: 226 number of free elements: 15 00:15:22.331 list of free elements. 
size: 12.485474 MiB 00:15:22.331 element at address: 0x200000400000 with size: 1.999512 MiB 00:15:22.331 element at address: 0x200018e00000 with size: 0.999878 MiB 00:15:22.331 element at address: 0x200019000000 with size: 0.999878 MiB 00:15:22.332 element at address: 0x200003e00000 with size: 0.996277 MiB 00:15:22.332 element at address: 0x200031c00000 with size: 0.994446 MiB 00:15:22.332 element at address: 0x200013800000 with size: 0.978699 MiB 00:15:22.332 element at address: 0x200007000000 with size: 0.959839 MiB 00:15:22.332 element at address: 0x200019200000 with size: 0.936584 MiB 00:15:22.332 element at address: 0x200000200000 with size: 0.837036 MiB 00:15:22.332 element at address: 0x20001aa00000 with size: 0.571899 MiB 00:15:22.332 element at address: 0x20000b200000 with size: 0.489807 MiB 00:15:22.332 element at address: 0x200000800000 with size: 0.487061 MiB 00:15:22.332 element at address: 0x200019400000 with size: 0.485657 MiB 00:15:22.332 element at address: 0x200027e00000 with size: 0.398132 MiB 00:15:22.332 element at address: 0x200003a00000 with size: 0.350769 MiB 00:15:22.332 list of standard malloc elements. size: 199.251953 MiB 00:15:22.332 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:15:22.332 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:15:22.332 element at address: 0x200018efff80 with size: 1.000122 MiB 00:15:22.332 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:15:22.332 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:15:22.332 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:15:22.332 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:15:22.332 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:15:22.332 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:15:22.332 element at address: 0x2000002d6480 with size: 0.000183 MiB 00:15:22.332 element at address: 0x2000002d6540 with size: 0.000183 MiB 00:15:22.332 element at address: 0x2000002d6600 with size: 0.000183 MiB 00:15:22.332 element at address: 0x2000002d66c0 with size: 0.000183 MiB 00:15:22.332 element at address: 0x2000002d6780 with size: 0.000183 MiB 00:15:22.332 element at address: 0x2000002d6840 with size: 0.000183 MiB 00:15:22.332 element at address: 0x2000002d6900 with size: 0.000183 MiB 00:15:22.332 element at address: 0x2000002d69c0 with size: 0.000183 MiB 00:15:22.332 element at address: 0x2000002d6a80 with size: 0.000183 MiB 00:15:22.332 element at address: 0x2000002d6b40 with size: 0.000183 MiB 00:15:22.332 element at address: 0x2000002d6c00 with size: 0.000183 MiB 00:15:22.332 element at address: 0x2000002d6cc0 with size: 0.000183 MiB 00:15:22.332 element at address: 0x2000002d6d80 with size: 0.000183 MiB 00:15:22.332 element at address: 0x2000002d6e40 with size: 0.000183 MiB 00:15:22.332 element at address: 0x2000002d6f00 with size: 0.000183 MiB 00:15:22.332 element at address: 0x2000002d6fc0 with size: 0.000183 MiB 00:15:22.332 element at address: 0x2000002d71c0 with size: 0.000183 MiB 00:15:22.332 element at address: 0x2000002d7280 with size: 0.000183 MiB 00:15:22.332 element at address: 0x2000002d7340 with size: 0.000183 MiB 00:15:22.332 element at address: 0x2000002d7400 with size: 0.000183 MiB 00:15:22.332 element at address: 0x2000002d74c0 with size: 0.000183 MiB 00:15:22.332 element at address: 0x2000002d7580 with size: 0.000183 MiB 00:15:22.332 element at address: 0x2000002d7640 with size: 0.000183 MiB 00:15:22.332 element at address: 0x2000002d7700 with size: 0.000183 MiB 
00:15:22.332 element at address: 0x2000002d77c0 with size: 0.000183 MiB 00:15:22.332 element at address: 0x2000002d7880 with size: 0.000183 MiB 00:15:22.332 element at address: 0x2000002d7940 with size: 0.000183 MiB 00:15:22.332 element at address: 0x2000002d7a00 with size: 0.000183 MiB 00:15:22.332 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:15:22.332 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:15:22.332 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:15:22.332 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:15:22.332 element at address: 0x20000087cb00 with size: 0.000183 MiB 00:15:22.332 element at address: 0x20000087cbc0 with size: 0.000183 MiB 00:15:22.332 element at address: 0x20000087cc80 with size: 0.000183 MiB 00:15:22.332 element at address: 0x20000087cd40 with size: 0.000183 MiB 00:15:22.332 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:15:22.332 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:15:22.332 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:15:22.332 element at address: 0x200003a59cc0 with size: 0.000183 MiB 00:15:22.332 element at address: 0x200003a59d80 with size: 0.000183 MiB 00:15:22.332 element at address: 0x200003a59e40 with size: 0.000183 MiB 00:15:22.332 element at address: 0x200003a59f00 with size: 0.000183 MiB 00:15:22.332 element at address: 0x200003a59fc0 with size: 0.000183 MiB 00:15:22.332 element at address: 0x200003a5a080 with size: 0.000183 MiB 00:15:22.332 element at address: 0x200003a5a140 with size: 0.000183 MiB 00:15:22.332 element at address: 0x200003a5a200 with size: 0.000183 MiB 00:15:22.332 element at address: 0x200003a5a2c0 with size: 0.000183 MiB 00:15:22.332 element at address: 0x200003a5a380 with size: 0.000183 MiB 00:15:22.332 element at address: 0x200003a5a440 with size: 0.000183 MiB 00:15:22.332 element at address: 0x200003a5a500 with size: 0.000183 MiB 00:15:22.332 element at address: 0x200003a5a5c0 with size: 0.000183 MiB 00:15:22.332 element at address: 0x200003a5a680 with size: 0.000183 MiB 00:15:22.332 element at address: 0x200003a5a740 with size: 0.000183 MiB 00:15:22.332 element at address: 0x200003a5a800 with size: 0.000183 MiB 00:15:22.332 element at address: 0x200003a5a8c0 with size: 0.000183 MiB 00:15:22.332 element at address: 0x200003a5a980 with size: 0.000183 MiB 00:15:22.332 element at address: 0x200003a5aa40 with size: 0.000183 MiB 00:15:22.332 element at address: 0x200003a5ab00 with size: 0.000183 MiB 00:15:22.332 element at address: 0x200003a5abc0 with size: 0.000183 MiB 00:15:22.332 element at address: 0x200003a5ac80 with size: 0.000183 MiB 00:15:22.332 element at address: 0x200003a5ad40 with size: 0.000183 MiB 00:15:22.332 element at address: 0x200003a5ae00 with size: 0.000183 MiB 00:15:22.332 element at address: 0x200003a5aec0 with size: 0.000183 MiB 00:15:22.332 element at address: 0x200003a5af80 with size: 0.000183 MiB 00:15:22.332 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:15:22.332 element at address: 0x200003adb300 with size: 0.000183 MiB 00:15:22.332 element at address: 0x200003adb500 with size: 0.000183 MiB 00:15:22.332 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:15:22.332 element at address: 0x200003affa80 with size: 0.000183 MiB 00:15:22.332 element at address: 0x200003affb40 with size: 0.000183 MiB 00:15:22.332 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:15:22.332 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:15:22.332 element at 
address: 0x20000b27d640 with size: 0.000183 MiB 00:15:22.332 element at address: 0x20000b27d700 with size: 0.000183 MiB 00:15:22.332 element at address: 0x20000b27d7c0 with size: 0.000183 MiB 00:15:22.332 element at address: 0x20000b27d880 with size: 0.000183 MiB 00:15:22.332 element at address: 0x20000b27d940 with size: 0.000183 MiB 00:15:22.332 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:15:22.332 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:15:22.332 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:15:22.332 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:15:22.332 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:15:22.332 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:15:22.332 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:15:22.332 element at address: 0x20001aa92680 with size: 0.000183 MiB 00:15:22.332 element at address: 0x20001aa92740 with size: 0.000183 MiB 00:15:22.332 element at address: 0x20001aa92800 with size: 0.000183 MiB 00:15:22.332 element at address: 0x20001aa928c0 with size: 0.000183 MiB 00:15:22.332 element at address: 0x20001aa92980 with size: 0.000183 MiB 00:15:22.332 element at address: 0x20001aa92a40 with size: 0.000183 MiB 00:15:22.332 element at address: 0x20001aa92b00 with size: 0.000183 MiB 00:15:22.332 element at address: 0x20001aa92bc0 with size: 0.000183 MiB 00:15:22.332 element at address: 0x20001aa92c80 with size: 0.000183 MiB 00:15:22.332 element at address: 0x20001aa92d40 with size: 0.000183 MiB 00:15:22.332 element at address: 0x20001aa92e00 with size: 0.000183 MiB 00:15:22.332 element at address: 0x20001aa92ec0 with size: 0.000183 MiB 00:15:22.332 element at address: 0x20001aa92f80 with size: 0.000183 MiB 00:15:22.332 element at address: 0x20001aa93040 with size: 0.000183 MiB 00:15:22.332 element at address: 0x20001aa93100 with size: 0.000183 MiB 00:15:22.332 element at address: 0x20001aa931c0 with size: 0.000183 MiB 00:15:22.332 element at address: 0x20001aa93280 with size: 0.000183 MiB 00:15:22.332 element at address: 0x20001aa93340 with size: 0.000183 MiB 00:15:22.332 element at address: 0x20001aa93400 with size: 0.000183 MiB 00:15:22.332 element at address: 0x20001aa934c0 with size: 0.000183 MiB 00:15:22.332 element at address: 0x20001aa93580 with size: 0.000183 MiB 00:15:22.332 element at address: 0x20001aa93640 with size: 0.000183 MiB 00:15:22.332 element at address: 0x20001aa93700 with size: 0.000183 MiB 00:15:22.332 element at address: 0x20001aa937c0 with size: 0.000183 MiB 00:15:22.332 element at address: 0x20001aa93880 with size: 0.000183 MiB 00:15:22.332 element at address: 0x20001aa93940 with size: 0.000183 MiB 00:15:22.332 element at address: 0x20001aa93a00 with size: 0.000183 MiB 00:15:22.332 element at address: 0x20001aa93ac0 with size: 0.000183 MiB 00:15:22.332 element at address: 0x20001aa93b80 with size: 0.000183 MiB 00:15:22.332 element at address: 0x20001aa93c40 with size: 0.000183 MiB 00:15:22.332 element at address: 0x20001aa93d00 with size: 0.000183 MiB 00:15:22.332 element at address: 0x20001aa93dc0 with size: 0.000183 MiB 00:15:22.332 element at address: 0x20001aa93e80 with size: 0.000183 MiB 00:15:22.332 element at address: 0x20001aa93f40 with size: 0.000183 MiB 00:15:22.332 element at address: 0x20001aa94000 with size: 0.000183 MiB 00:15:22.332 element at address: 0x20001aa940c0 with size: 0.000183 MiB 00:15:22.332 element at address: 0x20001aa94180 with size: 0.000183 MiB 00:15:22.332 element at address: 0x20001aa94240 
with size: 0.000183 MiB 00:15:22.332 element at address: 0x20001aa94300 with size: 0.000183 MiB 00:15:22.332 element at address: 0x20001aa943c0 with size: 0.000183 MiB 00:15:22.333 element at address: 0x20001aa94480 with size: 0.000183 MiB 00:15:22.333 element at address: 0x20001aa94540 with size: 0.000183 MiB 00:15:22.333 element at address: 0x20001aa94600 with size: 0.000183 MiB 00:15:22.333 element at address: 0x20001aa946c0 with size: 0.000183 MiB 00:15:22.333 element at address: 0x20001aa94780 with size: 0.000183 MiB 00:15:22.333 element at address: 0x20001aa94840 with size: 0.000183 MiB 00:15:22.333 element at address: 0x20001aa94900 with size: 0.000183 MiB 00:15:22.333 element at address: 0x20001aa949c0 with size: 0.000183 MiB 00:15:22.333 element at address: 0x20001aa94a80 with size: 0.000183 MiB 00:15:22.333 element at address: 0x20001aa94b40 with size: 0.000183 MiB 00:15:22.333 element at address: 0x20001aa94c00 with size: 0.000183 MiB 00:15:22.333 element at address: 0x20001aa94cc0 with size: 0.000183 MiB 00:15:22.333 element at address: 0x20001aa94d80 with size: 0.000183 MiB 00:15:22.333 element at address: 0x20001aa94e40 with size: 0.000183 MiB 00:15:22.333 element at address: 0x20001aa94f00 with size: 0.000183 MiB 00:15:22.333 element at address: 0x20001aa94fc0 with size: 0.000183 MiB 00:15:22.333 element at address: 0x20001aa95080 with size: 0.000183 MiB 00:15:22.333 element at address: 0x20001aa95140 with size: 0.000183 MiB 00:15:22.333 element at address: 0x20001aa95200 with size: 0.000183 MiB 00:15:22.333 element at address: 0x20001aa952c0 with size: 0.000183 MiB 00:15:22.333 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:15:22.333 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:15:22.333 element at address: 0x200027e65ec0 with size: 0.000183 MiB 00:15:22.333 element at address: 0x200027e65f80 with size: 0.000183 MiB 00:15:22.333 element at address: 0x200027e6cb80 with size: 0.000183 MiB 00:15:22.333 element at address: 0x200027e6cd80 with size: 0.000183 MiB 00:15:22.333 element at address: 0x200027e6ce40 with size: 0.000183 MiB 00:15:22.333 element at address: 0x200027e6cf00 with size: 0.000183 MiB 00:15:22.333 element at address: 0x200027e6cfc0 with size: 0.000183 MiB 00:15:22.333 element at address: 0x200027e6d080 with size: 0.000183 MiB 00:15:22.333 element at address: 0x200027e6d140 with size: 0.000183 MiB 00:15:22.333 element at address: 0x200027e6d200 with size: 0.000183 MiB 00:15:22.333 element at address: 0x200027e6d2c0 with size: 0.000183 MiB 00:15:22.333 element at address: 0x200027e6d380 with size: 0.000183 MiB 00:15:22.333 element at address: 0x200027e6d440 with size: 0.000183 MiB 00:15:22.333 element at address: 0x200027e6d500 with size: 0.000183 MiB 00:15:22.333 element at address: 0x200027e6d5c0 with size: 0.000183 MiB 00:15:22.333 element at address: 0x200027e6d680 with size: 0.000183 MiB 00:15:22.333 element at address: 0x200027e6d740 with size: 0.000183 MiB 00:15:22.333 element at address: 0x200027e6d800 with size: 0.000183 MiB 00:15:22.333 element at address: 0x200027e6d8c0 with size: 0.000183 MiB 00:15:22.333 element at address: 0x200027e6d980 with size: 0.000183 MiB 00:15:22.333 element at address: 0x200027e6da40 with size: 0.000183 MiB 00:15:22.333 element at address: 0x200027e6db00 with size: 0.000183 MiB 00:15:22.333 element at address: 0x200027e6dbc0 with size: 0.000183 MiB 00:15:22.333 element at address: 0x200027e6dc80 with size: 0.000183 MiB 00:15:22.333 element at address: 0x200027e6dd40 with size: 0.000183 MiB 
00:15:22.333 element at address: 0x200027e6de00 with size: 0.000183 MiB 00:15:22.333 element at address: 0x200027e6dec0 with size: 0.000183 MiB 00:15:22.333 element at address: 0x200027e6df80 with size: 0.000183 MiB 00:15:22.333 element at address: 0x200027e6e040 with size: 0.000183 MiB 00:15:22.333 element at address: 0x200027e6e100 with size: 0.000183 MiB 00:15:22.333 element at address: 0x200027e6e1c0 with size: 0.000183 MiB 00:15:22.333 element at address: 0x200027e6e280 with size: 0.000183 MiB 00:15:22.333 element at address: 0x200027e6e340 with size: 0.000183 MiB 00:15:22.333 element at address: 0x200027e6e400 with size: 0.000183 MiB 00:15:22.333 element at address: 0x200027e6e4c0 with size: 0.000183 MiB 00:15:22.333 element at address: 0x200027e6e580 with size: 0.000183 MiB 00:15:22.333 element at address: 0x200027e6e640 with size: 0.000183 MiB 00:15:22.333 element at address: 0x200027e6e700 with size: 0.000183 MiB 00:15:22.333 element at address: 0x200027e6e7c0 with size: 0.000183 MiB 00:15:22.333 element at address: 0x200027e6e880 with size: 0.000183 MiB 00:15:22.333 element at address: 0x200027e6e940 with size: 0.000183 MiB 00:15:22.333 element at address: 0x200027e6ea00 with size: 0.000183 MiB 00:15:22.333 element at address: 0x200027e6eac0 with size: 0.000183 MiB 00:15:22.333 element at address: 0x200027e6eb80 with size: 0.000183 MiB 00:15:22.333 element at address: 0x200027e6ec40 with size: 0.000183 MiB 00:15:22.333 element at address: 0x200027e6ed00 with size: 0.000183 MiB 00:15:22.333 element at address: 0x200027e6edc0 with size: 0.000183 MiB 00:15:22.333 element at address: 0x200027e6ee80 with size: 0.000183 MiB 00:15:22.333 element at address: 0x200027e6ef40 with size: 0.000183 MiB 00:15:22.333 element at address: 0x200027e6f000 with size: 0.000183 MiB 00:15:22.333 element at address: 0x200027e6f0c0 with size: 0.000183 MiB 00:15:22.333 element at address: 0x200027e6f180 with size: 0.000183 MiB 00:15:22.333 element at address: 0x200027e6f240 with size: 0.000183 MiB 00:15:22.333 element at address: 0x200027e6f300 with size: 0.000183 MiB 00:15:22.333 element at address: 0x200027e6f3c0 with size: 0.000183 MiB 00:15:22.333 element at address: 0x200027e6f480 with size: 0.000183 MiB 00:15:22.333 element at address: 0x200027e6f540 with size: 0.000183 MiB 00:15:22.333 element at address: 0x200027e6f600 with size: 0.000183 MiB 00:15:22.333 element at address: 0x200027e6f6c0 with size: 0.000183 MiB 00:15:22.333 element at address: 0x200027e6f780 with size: 0.000183 MiB 00:15:22.333 element at address: 0x200027e6f840 with size: 0.000183 MiB 00:15:22.333 element at address: 0x200027e6f900 with size: 0.000183 MiB 00:15:22.333 element at address: 0x200027e6f9c0 with size: 0.000183 MiB 00:15:22.333 element at address: 0x200027e6fa80 with size: 0.000183 MiB 00:15:22.333 element at address: 0x200027e6fb40 with size: 0.000183 MiB 00:15:22.333 element at address: 0x200027e6fc00 with size: 0.000183 MiB 00:15:22.333 element at address: 0x200027e6fcc0 with size: 0.000183 MiB 00:15:22.333 element at address: 0x200027e6fd80 with size: 0.000183 MiB 00:15:22.333 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:15:22.333 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:15:22.333 list of memzone associated elements. 
size: 602.262573 MiB 00:15:22.333 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:15:22.333 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:15:22.333 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:15:22.333 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:15:22.333 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:15:22.333 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_61950_0 00:15:22.333 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:15:22.333 associated memzone info: size: 48.002930 MiB name: MP_evtpool_61950_0 00:15:22.333 element at address: 0x200003fff380 with size: 48.003052 MiB 00:15:22.333 associated memzone info: size: 48.002930 MiB name: MP_msgpool_61950_0 00:15:22.333 element at address: 0x2000195be940 with size: 20.255554 MiB 00:15:22.333 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:15:22.333 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:15:22.333 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:15:22.333 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:15:22.333 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_61950 00:15:22.333 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:15:22.333 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_61950 00:15:22.333 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:15:22.333 associated memzone info: size: 1.007996 MiB name: MP_evtpool_61950 00:15:22.333 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:15:22.333 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:15:22.333 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:15:22.333 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:15:22.333 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:15:22.333 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:15:22.333 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:15:22.333 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:15:22.333 element at address: 0x200003eff180 with size: 1.000488 MiB 00:15:22.333 associated memzone info: size: 1.000366 MiB name: RG_ring_0_61950 00:15:22.333 element at address: 0x200003affc00 with size: 1.000488 MiB 00:15:22.333 associated memzone info: size: 1.000366 MiB name: RG_ring_1_61950 00:15:22.333 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:15:22.333 associated memzone info: size: 1.000366 MiB name: RG_ring_4_61950 00:15:22.333 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:15:22.333 associated memzone info: size: 1.000366 MiB name: RG_ring_5_61950 00:15:22.333 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:15:22.333 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_61950 00:15:22.333 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:15:22.333 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:15:22.333 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:15:22.333 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:15:22.333 element at address: 0x20001947c540 with size: 0.250488 MiB 00:15:22.333 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:15:22.333 element at address: 0x200003adf880 with size: 0.125488 MiB 00:15:22.333 associated memzone info: size: 
0.125366 MiB name: RG_ring_2_61950 00:15:22.333 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:15:22.333 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:15:22.333 element at address: 0x200027e66040 with size: 0.023743 MiB 00:15:22.333 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:15:22.333 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:15:22.333 associated memzone info: size: 0.015991 MiB name: RG_ring_3_61950 00:15:22.333 element at address: 0x200027e6c180 with size: 0.002441 MiB 00:15:22.334 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:15:22.334 element at address: 0x2000002d7080 with size: 0.000305 MiB 00:15:22.334 associated memzone info: size: 0.000183 MiB name: MP_msgpool_61950 00:15:22.334 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:15:22.334 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_61950 00:15:22.334 element at address: 0x200027e6cc40 with size: 0.000305 MiB 00:15:22.334 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:15:22.334 09:56:35 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:15:22.334 09:56:35 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 61950 00:15:22.334 09:56:35 dpdk_mem_utility -- common/autotest_common.sh@948 -- # '[' -z 61950 ']' 00:15:22.334 09:56:35 dpdk_mem_utility -- common/autotest_common.sh@952 -- # kill -0 61950 00:15:22.334 09:56:35 dpdk_mem_utility -- common/autotest_common.sh@953 -- # uname 00:15:22.334 09:56:35 dpdk_mem_utility -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:22.334 09:56:35 dpdk_mem_utility -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 61950 00:15:22.334 09:56:35 dpdk_mem_utility -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:22.334 09:56:35 dpdk_mem_utility -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:22.334 killing process with pid 61950 00:15:22.334 09:56:35 dpdk_mem_utility -- common/autotest_common.sh@966 -- # echo 'killing process with pid 61950' 00:15:22.334 09:56:35 dpdk_mem_utility -- common/autotest_common.sh@967 -- # kill 61950 00:15:22.334 09:56:35 dpdk_mem_utility -- common/autotest_common.sh@972 -- # wait 61950 00:15:22.593 00:15:22.593 real 0m1.539s 00:15:22.593 user 0m1.610s 00:15:22.593 sys 0m0.398s 00:15:22.593 09:56:36 dpdk_mem_utility -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:22.593 09:56:36 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:15:22.593 ************************************ 00:15:22.593 END TEST dpdk_mem_utility 00:15:22.593 ************************************ 00:15:22.593 09:56:36 -- common/autotest_common.sh@1142 -- # return 0 00:15:22.593 09:56:36 -- spdk/autotest.sh@181 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:15:22.593 09:56:36 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:15:22.593 09:56:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:22.593 09:56:36 -- common/autotest_common.sh@10 -- # set +x 00:15:22.593 ************************************ 00:15:22.593 START TEST event 00:15:22.593 ************************************ 00:15:22.593 09:56:36 event -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:15:22.852 * Looking for test storage... 
00:15:22.852 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:15:22.852 09:56:36 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:15:22.852 09:56:36 event -- bdev/nbd_common.sh@6 -- # set -e 00:15:22.852 09:56:36 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:15:22.852 09:56:36 event -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:15:22.852 09:56:36 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:22.852 09:56:36 event -- common/autotest_common.sh@10 -- # set +x 00:15:22.852 ************************************ 00:15:22.852 START TEST event_perf 00:15:22.852 ************************************ 00:15:22.852 09:56:36 event.event_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:15:22.852 Running I/O for 1 seconds...[2024-07-15 09:56:36.255059] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:15:22.852 [2024-07-15 09:56:36.255144] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62039 ] 00:15:22.852 [2024-07-15 09:56:36.398176] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:23.111 [2024-07-15 09:56:36.506517] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:23.111 [2024-07-15 09:56:36.506618] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:23.111 [2024-07-15 09:56:36.506763] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:23.111 [2024-07-15 09:56:36.506768] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:15:24.047 Running I/O for 1 seconds... 00:15:24.047 lcore 0: 180201 00:15:24.047 lcore 1: 180200 00:15:24.047 lcore 2: 180202 00:15:24.047 lcore 3: 180202 00:15:24.047 done. 00:15:24.047 00:15:24.047 real 0m1.367s 00:15:24.047 user 0m4.195s 00:15:24.047 sys 0m0.049s 00:15:24.047 09:56:37 event.event_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:24.047 09:56:37 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:15:24.047 ************************************ 00:15:24.047 END TEST event_perf 00:15:24.047 ************************************ 00:15:24.047 09:56:37 event -- common/autotest_common.sh@1142 -- # return 0 00:15:24.047 09:56:37 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:15:24.047 09:56:37 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:15:24.047 09:56:37 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:24.047 09:56:37 event -- common/autotest_common.sh@10 -- # set +x 00:15:24.047 ************************************ 00:15:24.047 START TEST event_reactor 00:15:24.047 ************************************ 00:15:24.047 09:56:37 event.event_reactor -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:15:24.305 [2024-07-15 09:56:37.638351] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:15:24.305 [2024-07-15 09:56:37.638439] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62072 ] 00:15:24.305 [2024-07-15 09:56:37.778503] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:24.563 [2024-07-15 09:56:37.893776] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:25.517 test_start 00:15:25.517 oneshot 00:15:25.517 tick 100 00:15:25.517 tick 100 00:15:25.517 tick 250 00:15:25.517 tick 100 00:15:25.517 tick 100 00:15:25.517 tick 250 00:15:25.517 tick 100 00:15:25.517 tick 500 00:15:25.517 tick 100 00:15:25.517 tick 100 00:15:25.517 tick 250 00:15:25.517 tick 100 00:15:25.517 tick 100 00:15:25.517 test_end 00:15:25.517 00:15:25.517 real 0m1.345s 00:15:25.517 user 0m1.185s 00:15:25.517 sys 0m0.055s 00:15:25.518 09:56:38 event.event_reactor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:25.518 09:56:38 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:15:25.518 ************************************ 00:15:25.518 END TEST event_reactor 00:15:25.518 ************************************ 00:15:25.518 09:56:39 event -- common/autotest_common.sh@1142 -- # return 0 00:15:25.518 09:56:39 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:15:25.518 09:56:39 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:15:25.518 09:56:39 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:25.518 09:56:39 event -- common/autotest_common.sh@10 -- # set +x 00:15:25.518 ************************************ 00:15:25.518 START TEST event_reactor_perf 00:15:25.518 ************************************ 00:15:25.518 09:56:39 event.event_reactor_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:15:25.518 [2024-07-15 09:56:39.051869] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:15:25.518 [2024-07-15 09:56:39.052493] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62113 ] 00:15:25.791 [2024-07-15 09:56:39.193177] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:25.791 [2024-07-15 09:56:39.300030] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:27.173 test_start 00:15:27.173 test_end 00:15:27.173 Performance: 454375 events per second 00:15:27.173 00:15:27.173 real 0m1.348s 00:15:27.173 user 0m1.190s 00:15:27.173 sys 0m0.051s 00:15:27.173 09:56:40 event.event_reactor_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:27.173 ************************************ 00:15:27.173 END TEST event_reactor_perf 00:15:27.173 09:56:40 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:15:27.173 ************************************ 00:15:27.173 09:56:40 event -- common/autotest_common.sh@1142 -- # return 0 00:15:27.173 09:56:40 event -- event/event.sh@49 -- # uname -s 00:15:27.173 09:56:40 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:15:27.173 09:56:40 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:15:27.173 09:56:40 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:15:27.173 09:56:40 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:27.173 09:56:40 event -- common/autotest_common.sh@10 -- # set +x 00:15:27.173 ************************************ 00:15:27.173 START TEST event_scheduler 00:15:27.173 ************************************ 00:15:27.173 09:56:40 event.event_scheduler -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:15:27.173 * Looking for test storage... 00:15:27.173 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:15:27.173 09:56:40 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:15:27.173 09:56:40 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=62169 00:15:27.173 09:56:40 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:15:27.173 09:56:40 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:15:27.173 09:56:40 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 62169 00:15:27.173 09:56:40 event.event_scheduler -- common/autotest_common.sh@829 -- # '[' -z 62169 ']' 00:15:27.173 09:56:40 event.event_scheduler -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:27.173 09:56:40 event.event_scheduler -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:27.173 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:27.173 09:56:40 event.event_scheduler -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:27.173 09:56:40 event.event_scheduler -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:27.173 09:56:40 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:15:27.173 [2024-07-15 09:56:40.599467] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:15:27.173 [2024-07-15 09:56:40.599554] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62169 ] 00:15:27.173 [2024-07-15 09:56:40.723282] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:27.432 [2024-07-15 09:56:40.832166] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:27.432 [2024-07-15 09:56:40.832341] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:27.432 [2024-07-15 09:56:40.832343] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:15:27.432 [2024-07-15 09:56:40.832266] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:28.001 09:56:41 event.event_scheduler -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:28.001 09:56:41 event.event_scheduler -- common/autotest_common.sh@862 -- # return 0 00:15:28.001 09:56:41 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:15:28.001 09:56:41 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:28.001 09:56:41 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:15:28.001 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:15:28.001 POWER: Cannot set governor of lcore 0 to userspace 00:15:28.001 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:15:28.001 POWER: Cannot set governor of lcore 0 to performance 00:15:28.001 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:15:28.001 POWER: Cannot set governor of lcore 0 to userspace 00:15:28.001 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:15:28.001 POWER: Cannot set governor of lcore 0 to userspace 00:15:28.001 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:15:28.001 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:15:28.001 POWER: Unable to set Power Management Environment for lcore 0 00:15:28.001 [2024-07-15 09:56:41.516495] dpdk_governor.c: 130:_init_core: *ERROR*: Failed to initialize on core0 00:15:28.001 [2024-07-15 09:56:41.516509] dpdk_governor.c: 191:_init: *ERROR*: Failed to initialize on core0 00:15:28.001 [2024-07-15 09:56:41.516523] scheduler_dynamic.c: 270:init: *NOTICE*: Unable to initialize dpdk governor 00:15:28.001 [2024-07-15 09:56:41.516541] scheduler_dynamic.c: 416:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:15:28.001 [2024-07-15 09:56:41.516547] scheduler_dynamic.c: 418:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:15:28.001 [2024-07-15 09:56:41.516556] scheduler_dynamic.c: 420:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:15:28.001 09:56:41 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:28.001 09:56:41 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:15:28.001 09:56:41 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:28.001 09:56:41 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:15:28.262 [2024-07-15 09:56:41.599435] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
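Note: the POWER and GUEST_CHANNEL errors above are expected on this VM. The dynamic scheduler tries to drive CPU frequency through the DPDK power library, cannot open the per-core cpufreq governor sysfs nodes or the virtio power-agent channel, and then proceeds without a governor (hence the "Setting scheduler load limit / core limit / core busy" notices). A quick host-side check for the missing prerequisite, using the path from the error text:

  # the dpdk governor needs a writable cpufreq governor per core; this VM exposes none
  ls /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor 2>/dev/null \
      || echo "no cpufreq scaling_governor exposed; dpdk governor init will fail"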
00:15:28.262 09:56:41 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:28.262 09:56:41 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:15:28.262 09:56:41 event.event_scheduler -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:15:28.262 09:56:41 event.event_scheduler -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:28.262 09:56:41 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:15:28.262 ************************************ 00:15:28.262 START TEST scheduler_create_thread 00:15:28.262 ************************************ 00:15:28.262 09:56:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1123 -- # scheduler_create_thread 00:15:28.262 09:56:41 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:15:28.262 09:56:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:28.262 09:56:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:15:28.262 2 00:15:28.262 09:56:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:28.262 09:56:41 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:15:28.262 09:56:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:28.262 09:56:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:15:28.262 3 00:15:28.262 09:56:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:28.262 09:56:41 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:15:28.262 09:56:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:28.262 09:56:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:15:28.262 4 00:15:28.262 09:56:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:28.262 09:56:41 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:15:28.262 09:56:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:28.262 09:56:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:15:28.262 5 00:15:28.262 09:56:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:28.262 09:56:41 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:15:28.262 09:56:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:28.262 09:56:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:15:28.262 6 00:15:28.262 09:56:41 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:28.262 09:56:41 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:15:28.262 09:56:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:28.262 09:56:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:15:28.262 7 00:15:28.262 09:56:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:28.262 09:56:41 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:15:28.262 09:56:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:28.262 09:56:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:15:28.262 8 00:15:28.262 09:56:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:28.262 09:56:41 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:15:28.262 09:56:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:28.262 09:56:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:15:28.262 9 00:15:28.262 09:56:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:28.262 09:56:41 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:15:28.262 09:56:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:28.262 09:56:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:15:28.830 10 00:15:28.830 09:56:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:28.831 09:56:42 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:15:28.831 09:56:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:28.831 09:56:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:15:30.210 09:56:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:30.210 09:56:43 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:15:30.210 09:56:43 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:15:30.210 09:56:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:30.210 09:56:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:15:30.777 09:56:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:30.777 09:56:44 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:15:30.777 09:56:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:30.777 09:56:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:15:31.711 09:56:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:31.711 09:56:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:15:31.711 09:56:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:15:31.711 09:56:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:31.711 09:56:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:15:32.277 09:56:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:32.277 00:15:32.277 real 0m4.213s 00:15:32.277 user 0m0.029s 00:15:32.277 sys 0m0.004s 00:15:32.277 09:56:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:32.277 09:56:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:15:32.277 ************************************ 00:15:32.277 END TEST scheduler_create_thread 00:15:32.277 ************************************ 00:15:32.535 09:56:45 event.event_scheduler -- common/autotest_common.sh@1142 -- # return 0 00:15:32.535 09:56:45 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:15:32.535 09:56:45 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 62169 00:15:32.535 09:56:45 event.event_scheduler -- common/autotest_common.sh@948 -- # '[' -z 62169 ']' 00:15:32.535 09:56:45 event.event_scheduler -- common/autotest_common.sh@952 -- # kill -0 62169 00:15:32.535 09:56:45 event.event_scheduler -- common/autotest_common.sh@953 -- # uname 00:15:32.535 09:56:45 event.event_scheduler -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:32.535 09:56:45 event.event_scheduler -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 62169 00:15:32.535 09:56:45 event.event_scheduler -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:15:32.535 09:56:45 event.event_scheduler -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:15:32.535 09:56:45 event.event_scheduler -- common/autotest_common.sh@966 -- # echo 'killing process with pid 62169' 00:15:32.535 killing process with pid 62169 00:15:32.535 09:56:45 event.event_scheduler -- common/autotest_common.sh@967 -- # kill 62169 00:15:32.535 09:56:45 event.event_scheduler -- common/autotest_common.sh@972 -- # wait 62169 00:15:32.794 [2024-07-15 09:56:46.203612] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
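The scheduler_create_thread trace above is driven entirely through plugin RPCs: four active_pinned threads on masks 0x1-0x8 at 100% activity, four idle_pinned threads on the same masks at 0%, two unpinned threads (one_third_active at 30%, half_active at 0%), one activity bump to 50%, and one create-then-delete. A minimal sketch of that RPC sequence, assuming the scheduler test app from earlier in the log is still listening on the default /var/tmp/spdk.sock and that rpc.py can import the scheduler_plugin module that rpc_cmd --plugin loads:

#!/usr/bin/env bash
# Replays the scheduler_thread_* RPC calls echoed by scheduler.sh (sketch only).
rpc="scripts/rpc.py --plugin scheduler_plugin"

for mask in 0x1 0x2 0x4 0x8; do                               # busy threads pinned to cores 0-3
    $rpc scheduler_thread_create -n active_pinned -m "$mask" -a 100
done
for mask in 0x1 0x2 0x4 0x8; do                               # idle threads pinned to the same cores
    $rpc scheduler_thread_create -n idle_pinned -m "$mask" -a 0
done

$rpc scheduler_thread_create -n one_third_active -a 30        # unpinned, 30% busy
tid=$($rpc scheduler_thread_create -n half_active -a 0)       # RPC returns the new thread id (11 above)
$rpc scheduler_thread_set_active "$tid" 50                    # raise its activity to 50%

del=$($rpc scheduler_thread_create -n deleted -a 100)         # thread id 12 above
$rpc scheduler_thread_delete "$del"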
00:15:33.053 00:15:33.053 real 0m6.030s 00:15:33.053 user 0m13.957s 00:15:33.053 sys 0m0.374s 00:15:33.053 09:56:46 event.event_scheduler -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:33.053 09:56:46 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:15:33.053 ************************************ 00:15:33.053 END TEST event_scheduler 00:15:33.053 ************************************ 00:15:33.053 09:56:46 event -- common/autotest_common.sh@1142 -- # return 0 00:15:33.053 09:56:46 event -- event/event.sh@51 -- # modprobe -n nbd 00:15:33.053 09:56:46 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:15:33.053 09:56:46 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:15:33.053 09:56:46 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:33.053 09:56:46 event -- common/autotest_common.sh@10 -- # set +x 00:15:33.053 ************************************ 00:15:33.053 START TEST app_repeat 00:15:33.053 ************************************ 00:15:33.053 09:56:46 event.app_repeat -- common/autotest_common.sh@1123 -- # app_repeat_test 00:15:33.053 09:56:46 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:33.053 09:56:46 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:33.053 09:56:46 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:15:33.053 09:56:46 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:15:33.053 09:56:46 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:15:33.053 09:56:46 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:15:33.053 09:56:46 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:15:33.053 09:56:46 event.app_repeat -- event/event.sh@19 -- # repeat_pid=62303 00:15:33.053 09:56:46 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:15:33.053 09:56:46 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:15:33.053 Process app_repeat pid: 62303 00:15:33.053 09:56:46 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 62303' 00:15:33.053 09:56:46 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:15:33.053 spdk_app_start Round 0 00:15:33.053 09:56:46 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:15:33.053 09:56:46 event.app_repeat -- event/event.sh@25 -- # waitforlisten 62303 /var/tmp/spdk-nbd.sock 00:15:33.053 09:56:46 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 62303 ']' 00:15:33.053 09:56:46 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:15:33.053 09:56:46 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:33.053 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:15:33.053 09:56:46 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:15:33.053 09:56:46 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:33.053 09:56:46 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:15:33.053 [2024-07-15 09:56:46.583320] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:15:33.053 [2024-07-15 09:56:46.583408] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62303 ] 00:15:33.312 [2024-07-15 09:56:46.725803] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:15:33.312 [2024-07-15 09:56:46.830396] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:33.312 [2024-07-15 09:56:46.830396] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:34.247 09:56:47 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:34.247 09:56:47 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:15:34.247 09:56:47 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:15:34.247 Malloc0 00:15:34.247 09:56:47 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:15:34.506 Malloc1 00:15:34.506 09:56:47 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:15:34.506 09:56:47 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:34.506 09:56:47 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:15:34.506 09:56:47 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:15:34.506 09:56:47 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:34.506 09:56:47 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:15:34.506 09:56:47 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:15:34.506 09:56:47 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:34.507 09:56:47 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:15:34.507 09:56:47 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:34.507 09:56:47 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:34.507 09:56:47 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:34.507 09:56:47 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:15:34.507 09:56:47 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:34.507 09:56:47 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:34.507 09:56:47 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:15:34.768 /dev/nbd0 00:15:34.768 09:56:48 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:34.768 09:56:48 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:34.768 09:56:48 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:15:34.768 09:56:48 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:15:34.768 09:56:48 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:15:34.768 09:56:48 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:15:34.768 09:56:48 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:15:34.768 09:56:48 event.app_repeat -- 
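app_repeat is launched with its RPC server on /var/tmp/spdk-nbd.sock, a two-core mask (-m 0x3, matching the two reactors above) and a 4 second repeat period (-t 4), and the test blocks until that socket answers before touching any nbd device. A rough equivalent of the launch-and-wait step; the polling loop only approximates what the waitforlisten helper in autotest_common.sh does:

#!/usr/bin/env bash
sock=/var/tmp/spdk-nbd.sock
/home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r "$sock" -m 0x3 -t 4 &
repeat_pid=$!

# Simplified waitforlisten: keep probing the RPC socket until it responds.
for _ in $(seq 1 100); do
    if scripts/rpc.py -s "$sock" rpc_get_methods >/dev/null 2>&1; then
        break
    fi
    sleep 0.1
done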
common/autotest_common.sh@871 -- # break 00:15:34.768 09:56:48 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:15:34.768 09:56:48 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:15:34.768 09:56:48 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:15:34.768 1+0 records in 00:15:34.768 1+0 records out 00:15:34.768 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000440849 s, 9.3 MB/s 00:15:34.768 09:56:48 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:15:34.768 09:56:48 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:15:34.768 09:56:48 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:15:34.768 09:56:48 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:15:34.768 09:56:48 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:15:34.768 09:56:48 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:34.768 09:56:48 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:34.768 09:56:48 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:15:35.032 /dev/nbd1 00:15:35.032 09:56:48 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:35.032 09:56:48 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:35.032 09:56:48 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:15:35.032 09:56:48 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:15:35.032 09:56:48 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:15:35.032 09:56:48 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:15:35.032 09:56:48 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:15:35.032 09:56:48 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:15:35.032 09:56:48 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:15:35.032 09:56:48 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:15:35.032 09:56:48 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:15:35.032 1+0 records in 00:15:35.032 1+0 records out 00:15:35.032 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00041215 s, 9.9 MB/s 00:15:35.032 09:56:48 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:15:35.032 09:56:48 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:15:35.033 09:56:48 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:15:35.033 09:56:48 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:15:35.033 09:56:48 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:15:35.033 09:56:48 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:35.033 09:56:48 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:35.033 09:56:48 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:15:35.033 09:56:48 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:35.033 
09:56:48 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:15:35.299 09:56:48 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:15:35.299 { 00:15:35.299 "bdev_name": "Malloc0", 00:15:35.299 "nbd_device": "/dev/nbd0" 00:15:35.299 }, 00:15:35.299 { 00:15:35.299 "bdev_name": "Malloc1", 00:15:35.299 "nbd_device": "/dev/nbd1" 00:15:35.299 } 00:15:35.299 ]' 00:15:35.299 09:56:48 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:15:35.299 { 00:15:35.299 "bdev_name": "Malloc0", 00:15:35.299 "nbd_device": "/dev/nbd0" 00:15:35.299 }, 00:15:35.299 { 00:15:35.299 "bdev_name": "Malloc1", 00:15:35.299 "nbd_device": "/dev/nbd1" 00:15:35.299 } 00:15:35.299 ]' 00:15:35.299 09:56:48 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:15:35.299 09:56:48 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:15:35.299 /dev/nbd1' 00:15:35.299 09:56:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:15:35.299 /dev/nbd1' 00:15:35.299 09:56:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:15:35.299 09:56:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:15:35.299 09:56:48 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:15:35.299 09:56:48 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:15:35.299 09:56:48 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:15:35.299 09:56:48 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:15:35.300 09:56:48 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:35.300 09:56:48 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:15:35.300 09:56:48 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:15:35.300 09:56:48 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:15:35.300 09:56:48 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:15:35.300 09:56:48 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:15:35.300 256+0 records in 00:15:35.300 256+0 records out 00:15:35.300 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00638454 s, 164 MB/s 00:15:35.300 09:56:48 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:35.300 09:56:48 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:15:35.300 256+0 records in 00:15:35.300 256+0 records out 00:15:35.300 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0231597 s, 45.3 MB/s 00:15:35.300 09:56:48 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:35.300 09:56:48 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:15:35.300 256+0 records in 00:15:35.300 256+0 records out 00:15:35.300 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0239624 s, 43.8 MB/s 00:15:35.300 09:56:48 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:15:35.300 09:56:48 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:35.300 09:56:48 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:15:35.300 09:56:48 
event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:15:35.300 09:56:48 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:15:35.300 09:56:48 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:15:35.300 09:56:48 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:15:35.300 09:56:48 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:35.300 09:56:48 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:15:35.300 09:56:48 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:35.300 09:56:48 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:15:35.300 09:56:48 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:15:35.300 09:56:48 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:15:35.300 09:56:48 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:35.300 09:56:48 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:35.300 09:56:48 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:35.300 09:56:48 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:15:35.300 09:56:48 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:35.300 09:56:48 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:15:35.565 09:56:49 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:35.565 09:56:49 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:35.565 09:56:49 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:35.565 09:56:49 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:35.565 09:56:49 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:35.565 09:56:49 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:35.565 09:56:49 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:15:35.565 09:56:49 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:15:35.565 09:56:49 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:35.565 09:56:49 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:15:35.831 09:56:49 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:35.831 09:56:49 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:35.831 09:56:49 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:35.831 09:56:49 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:35.831 09:56:49 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:35.831 09:56:49 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:35.831 09:56:49 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:15:35.831 09:56:49 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:15:35.831 09:56:49 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:15:35.831 09:56:49 event.app_repeat -- 
bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:35.831 09:56:49 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:15:36.100 09:56:49 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:15:36.100 09:56:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:15:36.100 09:56:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:15:36.100 09:56:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:15:36.100 09:56:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:15:36.100 09:56:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:15:36.100 09:56:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:15:36.100 09:56:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:15:36.100 09:56:49 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:15:36.100 09:56:49 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:15:36.100 09:56:49 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:15:36.100 09:56:49 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:15:36.100 09:56:49 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:15:36.370 09:56:49 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:15:36.370 [2024-07-15 09:56:49.931620] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:15:36.633 [2024-07-15 09:56:50.032544] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:36.633 [2024-07-15 09:56:50.032545] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:36.633 [2024-07-15 09:56:50.074193] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:15:36.633 [2024-07-15 09:56:50.074238] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:15:39.924 spdk_app_start Round 1 00:15:39.924 09:56:52 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:15:39.924 09:56:52 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:15:39.924 09:56:52 event.app_repeat -- event/event.sh@25 -- # waitforlisten 62303 /var/tmp/spdk-nbd.sock 00:15:39.924 09:56:52 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 62303 ']' 00:15:39.924 09:56:52 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:15:39.924 09:56:52 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:39.924 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:15:39.924 09:56:52 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
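Every app_repeat round repeats the same nbd round trip shown above: create two 64 MiB malloc bdevs with a 4096-byte block size, export them as /dev/nbd0 and /dev/nbd1, write 1 MiB of random data through each device, read it back with cmp, then tear the exports down. Condensed into one pass, using the same paths as the trace, with error handling omitted:

#!/usr/bin/env bash
sock=/var/tmp/spdk-nbd.sock
rpc="scripts/rpc.py -s $sock"
tmp=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest

$rpc bdev_malloc_create 64 4096            # -> Malloc0
$rpc bdev_malloc_create 64 4096            # -> Malloc1
$rpc nbd_start_disk Malloc0 /dev/nbd0
$rpc nbd_start_disk Malloc1 /dev/nbd1

dd if=/dev/urandom of="$tmp" bs=4096 count=256             # 1 MiB of reference data
for nbd in /dev/nbd0 /dev/nbd1; do
    dd if="$tmp" of="$nbd" bs=4096 count=256 oflag=direct  # write through the nbd export
    cmp -b -n 1M "$tmp" "$nbd"                             # read back and compare
done
rm "$tmp"

$rpc nbd_stop_disk /dev/nbd0
$rpc nbd_stop_disk /dev/nbd1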
00:15:39.924 09:56:52 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:39.924 09:56:52 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:15:39.924 09:56:52 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:39.924 09:56:52 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:15:39.924 09:56:52 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:15:39.924 Malloc0 00:15:39.924 09:56:53 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:15:39.924 Malloc1 00:15:39.924 09:56:53 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:15:39.924 09:56:53 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:39.924 09:56:53 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:15:39.924 09:56:53 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:15:39.924 09:56:53 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:39.924 09:56:53 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:15:39.924 09:56:53 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:15:39.924 09:56:53 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:39.924 09:56:53 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:15:39.924 09:56:53 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:39.924 09:56:53 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:39.924 09:56:53 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:39.924 09:56:53 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:15:39.924 09:56:53 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:39.924 09:56:53 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:39.924 09:56:53 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:15:40.183 /dev/nbd0 00:15:40.184 09:56:53 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:40.184 09:56:53 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:40.184 09:56:53 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:15:40.184 09:56:53 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:15:40.184 09:56:53 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:15:40.184 09:56:53 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:15:40.184 09:56:53 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:15:40.184 09:56:53 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:15:40.184 09:56:53 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:15:40.184 09:56:53 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:15:40.184 09:56:53 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:15:40.184 1+0 records in 00:15:40.184 1+0 records out 
00:15:40.184 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000444538 s, 9.2 MB/s 00:15:40.184 09:56:53 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:15:40.184 09:56:53 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:15:40.184 09:56:53 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:15:40.184 09:56:53 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:15:40.184 09:56:53 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:15:40.184 09:56:53 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:40.184 09:56:53 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:40.184 09:56:53 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:15:40.446 /dev/nbd1 00:15:40.446 09:56:53 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:40.446 09:56:53 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:40.446 09:56:53 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:15:40.446 09:56:53 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:15:40.446 09:56:53 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:15:40.446 09:56:53 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:15:40.446 09:56:53 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:15:40.446 09:56:53 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:15:40.446 09:56:53 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:15:40.446 09:56:53 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:15:40.446 09:56:53 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:15:40.446 1+0 records in 00:15:40.446 1+0 records out 00:15:40.447 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000369753 s, 11.1 MB/s 00:15:40.447 09:56:53 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:15:40.447 09:56:53 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:15:40.447 09:56:53 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:15:40.447 09:56:53 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:15:40.447 09:56:53 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:15:40.447 09:56:53 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:40.447 09:56:53 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:40.447 09:56:53 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:15:40.447 09:56:53 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:40.447 09:56:53 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:15:40.713 09:56:54 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:15:40.713 { 00:15:40.713 "bdev_name": "Malloc0", 00:15:40.713 "nbd_device": "/dev/nbd0" 00:15:40.713 }, 00:15:40.713 { 00:15:40.713 "bdev_name": "Malloc1", 00:15:40.713 "nbd_device": "/dev/nbd1" 00:15:40.713 } 
00:15:40.713 ]' 00:15:40.713 09:56:54 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:15:40.713 { 00:15:40.713 "bdev_name": "Malloc0", 00:15:40.713 "nbd_device": "/dev/nbd0" 00:15:40.713 }, 00:15:40.713 { 00:15:40.713 "bdev_name": "Malloc1", 00:15:40.713 "nbd_device": "/dev/nbd1" 00:15:40.713 } 00:15:40.713 ]' 00:15:40.713 09:56:54 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:15:40.713 09:56:54 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:15:40.713 /dev/nbd1' 00:15:40.984 09:56:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:15:40.984 /dev/nbd1' 00:15:40.984 09:56:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:15:40.984 09:56:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:15:40.984 09:56:54 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:15:40.984 09:56:54 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:15:40.984 09:56:54 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:15:40.984 09:56:54 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:15:40.984 09:56:54 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:40.984 09:56:54 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:15:40.984 09:56:54 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:15:40.984 09:56:54 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:15:40.984 09:56:54 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:15:40.984 09:56:54 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:15:40.984 256+0 records in 00:15:40.984 256+0 records out 00:15:40.984 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00532401 s, 197 MB/s 00:15:40.984 09:56:54 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:40.984 09:56:54 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:15:40.984 256+0 records in 00:15:40.984 256+0 records out 00:15:40.984 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0227218 s, 46.1 MB/s 00:15:40.984 09:56:54 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:40.984 09:56:54 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:15:40.984 256+0 records in 00:15:40.984 256+0 records out 00:15:40.984 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0259861 s, 40.4 MB/s 00:15:40.984 09:56:54 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:15:40.984 09:56:54 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:40.984 09:56:54 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:15:40.985 09:56:54 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:15:40.985 09:56:54 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:15:40.985 09:56:54 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:15:40.985 09:56:54 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:15:40.985 09:56:54 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:40.985 09:56:54 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:15:40.985 09:56:54 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:40.985 09:56:54 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:15:40.985 09:56:54 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:15:40.985 09:56:54 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:15:40.985 09:56:54 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:40.985 09:56:54 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:40.985 09:56:54 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:40.985 09:56:54 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:15:40.985 09:56:54 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:40.985 09:56:54 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:15:41.257 09:56:54 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:41.257 09:56:54 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:41.257 09:56:54 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:41.257 09:56:54 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:41.257 09:56:54 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:41.257 09:56:54 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:41.257 09:56:54 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:15:41.257 09:56:54 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:15:41.257 09:56:54 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:41.257 09:56:54 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:15:41.257 09:56:54 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:41.257 09:56:54 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:41.257 09:56:54 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:41.257 09:56:54 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:41.257 09:56:54 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:41.257 09:56:54 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:41.257 09:56:54 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:15:41.257 09:56:54 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:15:41.257 09:56:54 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:15:41.257 09:56:54 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:41.257 09:56:54 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:15:41.519 09:56:55 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:15:41.519 09:56:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:15:41.519 09:56:55 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:15:41.519 09:56:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:15:41.519 09:56:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:15:41.519 09:56:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:15:41.519 09:56:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:15:41.519 09:56:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:15:41.519 09:56:55 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:15:41.519 09:56:55 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:15:41.519 09:56:55 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:15:41.519 09:56:55 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:15:41.519 09:56:55 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:15:41.777 09:56:55 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:15:42.037 [2024-07-15 09:56:55.469117] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:15:42.037 [2024-07-15 09:56:55.563037] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:42.037 [2024-07-15 09:56:55.563037] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:42.037 [2024-07-15 09:56:55.604178] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:15:42.037 [2024-07-15 09:56:55.604218] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:15:45.348 spdk_app_start Round 2 00:15:45.348 09:56:58 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:15:45.348 09:56:58 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:15:45.348 09:56:58 event.app_repeat -- event/event.sh@25 -- # waitforlisten 62303 /var/tmp/spdk-nbd.sock 00:15:45.348 09:56:58 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 62303 ']' 00:15:45.348 09:56:58 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:15:45.348 09:56:58 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:45.348 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:15:45.348 09:56:58 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
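Before any data is written, each exported device is probed the same way in every round, including the one starting here: wait for its name to appear in /proc/partitions, then require one successful 4 KiB O_DIRECT read. A stripped-down version of that waitfornbd check; the individual commands come from the trace, but the retry pacing is a guess:

# Returns 0 once /dev/$1 (e.g. nbd0) is listed and readable, non-zero otherwise.
waitfornbd_sketch() {
    local name=$1 i
    local tmp=/home/vagrant/spdk_repo/spdk/test/event/nbdtest
    for ((i = 1; i <= 20; i++)); do
        grep -q -w "$name" /proc/partitions && break
        sleep 0.1
    done
    dd if=/dev/"$name" of="$tmp" bs=4096 count=1 iflag=direct || return 1
    local size
    size=$(stat -c %s "$tmp")
    rm -f "$tmp"
    [ "$size" != 0 ]
}

waitfornbd_sketch nbd0   # usage: probe /dev/nbd0 before writing to it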
00:15:45.348 09:56:58 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:45.348 09:56:58 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:15:45.348 09:56:58 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:45.348 09:56:58 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:15:45.348 09:56:58 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:15:45.348 Malloc0 00:15:45.348 09:56:58 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:15:45.608 Malloc1 00:15:45.608 09:56:58 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:15:45.608 09:56:58 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:45.608 09:56:58 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:15:45.608 09:56:58 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:15:45.608 09:56:58 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:45.608 09:56:58 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:15:45.608 09:56:58 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:15:45.608 09:56:58 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:45.608 09:56:58 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:15:45.608 09:56:58 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:45.608 09:56:58 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:45.608 09:56:58 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:45.608 09:56:58 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:15:45.608 09:56:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:45.608 09:56:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:45.608 09:56:58 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:15:45.608 /dev/nbd0 00:15:45.867 09:56:59 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:45.867 09:56:59 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:45.867 09:56:59 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:15:45.867 09:56:59 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:15:45.867 09:56:59 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:15:45.867 09:56:59 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:15:45.867 09:56:59 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:15:45.867 09:56:59 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:15:45.867 09:56:59 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:15:45.867 09:56:59 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:15:45.867 09:56:59 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:15:45.867 1+0 records in 00:15:45.867 1+0 records out 
00:15:45.867 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000221298 s, 18.5 MB/s 00:15:45.867 09:56:59 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:15:45.867 09:56:59 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:15:45.867 09:56:59 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:15:45.867 09:56:59 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:15:45.867 09:56:59 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:15:45.867 09:56:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:45.867 09:56:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:45.867 09:56:59 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:15:45.867 /dev/nbd1 00:15:45.867 09:56:59 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:45.867 09:56:59 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:45.867 09:56:59 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:15:45.867 09:56:59 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:15:45.868 09:56:59 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:15:45.868 09:56:59 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:15:45.868 09:56:59 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:15:46.128 09:56:59 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:15:46.128 09:56:59 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:15:46.128 09:56:59 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:15:46.128 09:56:59 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:15:46.128 1+0 records in 00:15:46.128 1+0 records out 00:15:46.128 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000408761 s, 10.0 MB/s 00:15:46.128 09:56:59 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:15:46.128 09:56:59 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:15:46.128 09:56:59 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:15:46.128 09:56:59 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:15:46.128 09:56:59 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:15:46.128 09:56:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:46.128 09:56:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:46.128 09:56:59 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:15:46.128 09:56:59 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:46.128 09:56:59 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:15:46.128 09:56:59 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:15:46.128 { 00:15:46.128 "bdev_name": "Malloc0", 00:15:46.128 "nbd_device": "/dev/nbd0" 00:15:46.128 }, 00:15:46.128 { 00:15:46.128 "bdev_name": "Malloc1", 00:15:46.128 "nbd_device": "/dev/nbd1" 00:15:46.128 } 
00:15:46.128 ]' 00:15:46.128 09:56:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:15:46.128 { 00:15:46.128 "bdev_name": "Malloc0", 00:15:46.128 "nbd_device": "/dev/nbd0" 00:15:46.128 }, 00:15:46.128 { 00:15:46.128 "bdev_name": "Malloc1", 00:15:46.128 "nbd_device": "/dev/nbd1" 00:15:46.128 } 00:15:46.128 ]' 00:15:46.128 09:56:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:15:46.388 09:56:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:15:46.388 /dev/nbd1' 00:15:46.388 09:56:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:15:46.388 /dev/nbd1' 00:15:46.388 09:56:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:15:46.388 09:56:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:15:46.388 09:56:59 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:15:46.388 09:56:59 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:15:46.388 09:56:59 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:15:46.388 09:56:59 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:15:46.388 09:56:59 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:46.388 09:56:59 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:15:46.388 09:56:59 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:15:46.388 09:56:59 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:15:46.388 09:56:59 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:15:46.388 09:56:59 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:15:46.388 256+0 records in 00:15:46.388 256+0 records out 00:15:46.388 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0122239 s, 85.8 MB/s 00:15:46.388 09:56:59 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:46.388 09:56:59 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:15:46.388 256+0 records in 00:15:46.388 256+0 records out 00:15:46.388 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0226427 s, 46.3 MB/s 00:15:46.388 09:56:59 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:46.388 09:56:59 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:15:46.388 256+0 records in 00:15:46.388 256+0 records out 00:15:46.388 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0220514 s, 47.6 MB/s 00:15:46.388 09:56:59 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:15:46.388 09:56:59 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:46.388 09:56:59 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:15:46.388 09:56:59 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:15:46.388 09:56:59 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:15:46.388 09:56:59 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:15:46.388 09:56:59 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:15:46.388 09:56:59 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:46.388 09:56:59 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:15:46.388 09:56:59 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:46.388 09:56:59 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:15:46.388 09:56:59 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:15:46.388 09:56:59 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:15:46.388 09:56:59 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:46.388 09:56:59 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:46.388 09:56:59 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:46.388 09:56:59 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:15:46.388 09:56:59 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:46.388 09:56:59 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:15:46.646 09:57:00 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:46.646 09:57:00 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:46.646 09:57:00 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:46.646 09:57:00 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:46.646 09:57:00 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:46.646 09:57:00 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:46.646 09:57:00 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:15:46.646 09:57:00 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:15:46.646 09:57:00 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:46.646 09:57:00 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:15:46.906 09:57:00 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:46.906 09:57:00 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:46.906 09:57:00 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:46.906 09:57:00 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:46.906 09:57:00 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:46.906 09:57:00 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:46.906 09:57:00 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:15:46.906 09:57:00 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:15:46.906 09:57:00 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:15:46.906 09:57:00 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:46.906 09:57:00 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:15:47.165 09:57:00 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:15:47.165 09:57:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:15:47.165 09:57:00 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:15:47.165 09:57:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:15:47.165 09:57:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:15:47.165 09:57:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:15:47.165 09:57:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:15:47.165 09:57:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:15:47.165 09:57:00 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:15:47.165 09:57:00 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:15:47.165 09:57:00 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:15:47.165 09:57:00 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:15:47.165 09:57:00 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:15:47.426 09:57:00 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:15:47.426 [2024-07-15 09:57:00.945008] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:15:47.686 [2024-07-15 09:57:01.047160] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:47.686 [2024-07-15 09:57:01.047162] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:47.686 [2024-07-15 09:57:01.088588] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:15:47.686 [2024-07-15 09:57:01.088636] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:15:50.219 09:57:03 event.app_repeat -- event/event.sh@38 -- # waitforlisten 62303 /var/tmp/spdk-nbd.sock 00:15:50.219 09:57:03 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 62303 ']' 00:15:50.219 09:57:03 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:15:50.219 09:57:03 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:50.219 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:15:50.219 09:57:03 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
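What separates the rounds is the shutdown path: after each verification pass the script asks the app to exit over RPC with spdk_kill_instance SIGTERM and sleeps 3 seconds; the Round summaries further down show that app_repeat catches the signal, stops the current iteration and reinitializes itself, so the {0..2} loop only re-issues the RPC traffic rather than relaunching the binary. In outline (a sketch of the loop in event.sh, not its exact body):

sock=/var/tmp/spdk-nbd.sock
for i in {0..2}; do
    echo "spdk_app_start Round $i"
    # ... malloc/nbd create, dd + cmp verify, nbd teardown (see the sketch above) ...
    scripts/rpc.py -s "$sock" spdk_kill_instance SIGTERM   # request a graceful stop of this iteration
    sleep 3                                                # give app_repeat time to reinitialize
done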
00:15:50.219 09:57:03 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:50.219 09:57:03 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:15:50.477 09:57:04 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:50.477 09:57:04 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:15:50.477 09:57:04 event.app_repeat -- event/event.sh@39 -- # killprocess 62303 00:15:50.477 09:57:04 event.app_repeat -- common/autotest_common.sh@948 -- # '[' -z 62303 ']' 00:15:50.477 09:57:04 event.app_repeat -- common/autotest_common.sh@952 -- # kill -0 62303 00:15:50.477 09:57:04 event.app_repeat -- common/autotest_common.sh@953 -- # uname 00:15:50.477 09:57:04 event.app_repeat -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:50.477 09:57:04 event.app_repeat -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 62303 00:15:50.736 09:57:04 event.app_repeat -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:50.736 09:57:04 event.app_repeat -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:50.736 killing process with pid 62303 00:15:50.736 09:57:04 event.app_repeat -- common/autotest_common.sh@966 -- # echo 'killing process with pid 62303' 00:15:50.736 09:57:04 event.app_repeat -- common/autotest_common.sh@967 -- # kill 62303 00:15:50.736 09:57:04 event.app_repeat -- common/autotest_common.sh@972 -- # wait 62303 00:15:50.736 spdk_app_start is called in Round 0. 00:15:50.736 Shutdown signal received, stop current app iteration 00:15:50.736 Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 reinitialization... 00:15:50.736 spdk_app_start is called in Round 1. 00:15:50.736 Shutdown signal received, stop current app iteration 00:15:50.736 Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 reinitialization... 00:15:50.736 spdk_app_start is called in Round 2. 00:15:50.736 Shutdown signal received, stop current app iteration 00:15:50.736 Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 reinitialization... 00:15:50.736 spdk_app_start is called in Round 3. 
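killprocess, as echoed above for pid 62303, boils down to: confirm the pid is still alive, read its command name (resolved to reactor_0 here) so a sudo wrapper could be handled differently, print the notice, send the default SIGTERM and wait for the child to exit. Roughly:

killprocess_sketch() {
    local pid=$1
    kill -0 "$pid" || return 1                 # is it still running?
    local name
    name=$(ps --no-headers -o comm= "$pid")    # e.g. reactor_0
    # The real helper special-cases name = "sudo"; that branch is not taken in this run.
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"                                # reap it (works because the test started it)
}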
00:15:50.736 Shutdown signal received, stop current app iteration 00:15:50.736 09:57:04 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:15:50.736 09:57:04 event.app_repeat -- event/event.sh@42 -- # return 0 00:15:50.736 00:15:50.736 real 0m17.716s 00:15:50.736 user 0m38.950s 00:15:50.736 sys 0m2.849s 00:15:50.736 09:57:04 event.app_repeat -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:50.736 09:57:04 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:15:50.736 ************************************ 00:15:50.736 END TEST app_repeat 00:15:50.736 ************************************ 00:15:50.736 09:57:04 event -- common/autotest_common.sh@1142 -- # return 0 00:15:50.736 09:57:04 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:15:50.736 09:57:04 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:15:50.736 09:57:04 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:15:50.736 09:57:04 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:50.736 09:57:04 event -- common/autotest_common.sh@10 -- # set +x 00:15:50.736 ************************************ 00:15:50.736 START TEST cpu_locks 00:15:50.736 ************************************ 00:15:50.996 09:57:04 event.cpu_locks -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:15:50.996 * Looking for test storage... 00:15:50.996 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:15:50.996 09:57:04 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:15:50.996 09:57:04 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:15:50.996 09:57:04 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:15:50.996 09:57:04 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:15:50.996 09:57:04 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:15:50.996 09:57:04 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:50.996 09:57:04 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:15:50.996 ************************************ 00:15:50.996 START TEST default_locks 00:15:50.996 ************************************ 00:15:50.996 09:57:04 event.cpu_locks.default_locks -- common/autotest_common.sh@1123 -- # default_locks 00:15:50.996 09:57:04 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=62919 00:15:50.996 09:57:04 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:15:50.996 09:57:04 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 62919 00:15:50.996 09:57:04 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 62919 ']' 00:15:50.996 09:57:04 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:50.996 09:57:04 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:50.996 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:50.996 09:57:04 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
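The default_locks steps that follow launch a single spdk_tgt on core mask 0x1 and then confirm, via lslocks on the target PID, that it holds a spdk_cpu_lock file. A minimal stand-alone sketch of that pattern, assuming the spdk_tgt path shown in this log and using a crude socket-polling loop in place of the autotest waitforlisten helper:

  # Start a target that claims core 0 and verify its CPU core lock.
  SPDK_BIN=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt   # path as seen in this log
  RPC_SOCK=/var/tmp/spdk.sock

  "$SPDK_BIN" -m 0x1 -r "$RPC_SOCK" &
  tgt_pid=$!

  # Stand-in for waitforlisten: wait until the RPC socket exists.
  until [ -S "$RPC_SOCK" ]; do sleep 0.2; done

  # The lock shows up in lslocks as an entry on a spdk_cpu_lock_* file held by the
  # target (the same check the test's locks_exist helper performs below).
  if lslocks -p "$tgt_pid" | grep -q spdk_cpu_lock; then
      echo "core lock held by pid $tgt_pid"
  fi

  kill "$tgt_pid"; wait "$tgt_pid" 2>/dev/null || true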
00:15:50.996 09:57:04 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:50.996 09:57:04 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:15:50.996 [2024-07-15 09:57:04.511351] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:15:50.996 [2024-07-15 09:57:04.511425] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62919 ] 00:15:51.255 [2024-07-15 09:57:04.650320] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:51.255 [2024-07-15 09:57:04.760403] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:52.228 09:57:05 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:52.228 09:57:05 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 0 00:15:52.228 09:57:05 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 62919 00:15:52.228 09:57:05 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 62919 00:15:52.228 09:57:05 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:15:52.487 09:57:05 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 62919 00:15:52.487 09:57:05 event.cpu_locks.default_locks -- common/autotest_common.sh@948 -- # '[' -z 62919 ']' 00:15:52.487 09:57:05 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # kill -0 62919 00:15:52.487 09:57:05 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # uname 00:15:52.487 09:57:05 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:52.487 09:57:05 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 62919 00:15:52.487 09:57:05 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:52.487 09:57:05 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:52.487 killing process with pid 62919 00:15:52.487 09:57:05 event.cpu_locks.default_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 62919' 00:15:52.487 09:57:05 event.cpu_locks.default_locks -- common/autotest_common.sh@967 -- # kill 62919 00:15:52.487 09:57:05 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # wait 62919 00:15:52.746 09:57:06 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 62919 00:15:52.746 09:57:06 event.cpu_locks.default_locks -- common/autotest_common.sh@648 -- # local es=0 00:15:52.746 09:57:06 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 62919 00:15:52.746 09:57:06 event.cpu_locks.default_locks -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:15:52.746 09:57:06 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:52.746 09:57:06 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:15:52.746 09:57:06 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:52.746 09:57:06 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # waitforlisten 62919 00:15:52.746 09:57:06 event.cpu_locks.default_locks -- 
common/autotest_common.sh@829 -- # '[' -z 62919 ']' 00:15:52.746 09:57:06 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:52.746 09:57:06 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:52.746 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:52.746 09:57:06 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:52.746 09:57:06 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:52.746 09:57:06 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:15:52.746 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (62919) - No such process 00:15:52.746 ERROR: process (pid: 62919) is no longer running 00:15:52.746 09:57:06 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:52.746 09:57:06 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 1 00:15:52.746 09:57:06 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # es=1 00:15:52.746 09:57:06 event.cpu_locks.default_locks -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:52.746 09:57:06 event.cpu_locks.default_locks -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:52.746 09:57:06 event.cpu_locks.default_locks -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:52.746 09:57:06 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:15:52.746 09:57:06 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:15:52.746 09:57:06 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:15:52.746 09:57:06 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:15:52.746 00:15:52.746 real 0m1.789s 00:15:52.746 user 0m1.894s 00:15:52.746 sys 0m0.570s 00:15:52.746 09:57:06 event.cpu_locks.default_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:52.746 09:57:06 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:15:52.746 ************************************ 00:15:52.746 END TEST default_locks 00:15:52.746 ************************************ 00:15:52.746 09:57:06 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:15:52.746 09:57:06 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:15:52.746 09:57:06 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:15:52.746 09:57:06 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:52.746 09:57:06 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:15:52.746 ************************************ 00:15:52.746 START TEST default_locks_via_rpc 00:15:52.746 ************************************ 00:15:52.746 09:57:06 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1123 -- # default_locks_via_rpc 00:15:52.746 09:57:06 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=62977 00:15:52.746 09:57:06 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:15:52.746 09:57:06 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 62977 00:15:52.746 09:57:06 event.cpu_locks.default_locks_via_rpc -- 
common/autotest_common.sh@829 -- # '[' -z 62977 ']' 00:15:52.746 09:57:06 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:52.746 09:57:06 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:52.746 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:52.746 09:57:06 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:52.746 09:57:06 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:52.746 09:57:06 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:53.007 [2024-07-15 09:57:06.350247] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:15:53.007 [2024-07-15 09:57:06.350332] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62977 ] 00:15:53.007 [2024-07-15 09:57:06.476789] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:53.007 [2024-07-15 09:57:06.586748] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:53.938 09:57:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:53.938 09:57:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:15:53.938 09:57:07 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:15:53.938 09:57:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:53.938 09:57:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:53.938 09:57:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:53.938 09:57:07 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:15:53.938 09:57:07 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:15:53.938 09:57:07 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:15:53.938 09:57:07 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:15:53.938 09:57:07 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:15:53.938 09:57:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:53.938 09:57:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:53.938 09:57:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:53.938 09:57:07 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 62977 00:15:53.938 09:57:07 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 62977 00:15:53.938 09:57:07 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:15:54.196 09:57:07 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 62977 00:15:54.196 09:57:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@948 -- # '[' -z 62977 ']' 
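The default_locks_via_rpc steps above release the per-core locks on a running target with the framework_disable_cpumask_locks RPC, confirm no /var/tmp/spdk_cpu_lock_* files remain, and then re-acquire them with framework_enable_cpumask_locks before re-running the lslocks check. A hedged sketch of the same round trip driven through scripts/rpc.py rather than the test's rpc_cmd wrapper (method names and socket path as shown in this log; it is assumed rpc.py exits non-zero on a JSON-RPC error):

  RPC_PY=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # path as seen earlier in this log
  RPC_SOCK=/var/tmp/spdk.sock
  tgt_pid=$(pgrep -nf build/bin/spdk_tgt)              # or reuse the PID captured at launch

  # Drop the per-core lock files on the running target...
  "$RPC_PY" -s "$RPC_SOCK" framework_disable_cpumask_locks
  ls /var/tmp/spdk_cpu_lock_* 2>/dev/null && echo "unexpected: lock files still present"

  # ...then take them again without restarting the app.
  "$RPC_PY" -s "$RPC_SOCK" framework_enable_cpumask_locks
  lslocks -p "$tgt_pid" | grep -q spdk_cpu_lock && echo "locks re-acquired"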
00:15:54.196 09:57:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # kill -0 62977 00:15:54.196 09:57:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # uname 00:15:54.196 09:57:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:54.196 09:57:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 62977 00:15:54.196 09:57:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:54.196 09:57:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:54.196 killing process with pid 62977 00:15:54.196 09:57:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 62977' 00:15:54.196 09:57:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@967 -- # kill 62977 00:15:54.196 09:57:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # wait 62977 00:15:54.454 00:15:54.454 real 0m1.713s 00:15:54.454 user 0m1.849s 00:15:54.454 sys 0m0.492s 00:15:54.454 09:57:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:54.454 09:57:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:54.454 ************************************ 00:15:54.454 END TEST default_locks_via_rpc 00:15:54.454 ************************************ 00:15:54.713 09:57:08 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:15:54.713 09:57:08 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:15:54.713 09:57:08 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:15:54.713 09:57:08 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:54.713 09:57:08 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:15:54.713 ************************************ 00:15:54.713 START TEST non_locking_app_on_locked_coremask 00:15:54.713 ************************************ 00:15:54.713 09:57:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # non_locking_app_on_locked_coremask 00:15:54.713 09:57:08 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=63041 00:15:54.713 09:57:08 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:15:54.713 09:57:08 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 63041 /var/tmp/spdk.sock 00:15:54.713 09:57:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 63041 ']' 00:15:54.713 09:57:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:54.713 09:57:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:54.713 09:57:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:54.713 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:15:54.713 09:57:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:54.713 09:57:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:15:54.713 [2024-07-15 09:57:08.138190] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:15:54.713 [2024-07-15 09:57:08.138254] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63041 ] 00:15:54.713 [2024-07-15 09:57:08.275953] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:54.972 [2024-07-15 09:57:08.377232] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:55.538 09:57:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:55.538 09:57:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:15:55.538 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:15:55.538 09:57:08 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=63069 00:15:55.538 09:57:08 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 63069 /var/tmp/spdk2.sock 00:15:55.538 09:57:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 63069 ']' 00:15:55.538 09:57:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:15:55.538 09:57:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:55.538 09:57:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:15:55.538 09:57:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:55.538 09:57:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:15:55.538 09:57:08 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:15:55.538 [2024-07-15 09:57:09.045207] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:15:55.538 [2024-07-15 09:57:09.045360] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63069 ] 00:15:55.797 [2024-07-15 09:57:09.180536] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:15:55.797 [2024-07-15 09:57:09.180574] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:56.056 [2024-07-15 09:57:09.389092] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:56.315 09:57:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:56.315 09:57:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:15:56.315 09:57:09 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 63041 00:15:56.315 09:57:09 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:15:56.315 09:57:09 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 63041 00:15:56.889 09:57:10 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 63041 00:15:56.889 09:57:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 63041 ']' 00:15:56.889 09:57:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 63041 00:15:56.889 09:57:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:15:56.889 09:57:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:56.889 09:57:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 63041 00:15:56.889 killing process with pid 63041 00:15:56.889 09:57:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:56.889 09:57:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:56.889 09:57:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 63041' 00:15:56.889 09:57:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 63041 00:15:56.889 09:57:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 63041 00:15:57.456 09:57:10 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 63069 00:15:57.456 09:57:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 63069 ']' 00:15:57.456 09:57:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 63069 00:15:57.456 09:57:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:15:57.456 09:57:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:57.456 09:57:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 63069 00:15:57.456 killing process with pid 63069 00:15:57.456 09:57:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:57.456 09:57:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:57.456 09:57:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 63069' 00:15:57.456 09:57:11 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 63069 00:15:57.456 09:57:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 63069 00:15:58.022 00:15:58.022 real 0m3.260s 00:15:58.022 user 0m3.524s 00:15:58.022 sys 0m0.887s 00:15:58.022 09:57:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:58.022 09:57:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:15:58.022 ************************************ 00:15:58.022 END TEST non_locking_app_on_locked_coremask 00:15:58.022 ************************************ 00:15:58.022 09:57:11 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:15:58.022 09:57:11 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:15:58.022 09:57:11 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:15:58.022 09:57:11 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:58.022 09:57:11 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:15:58.022 ************************************ 00:15:58.022 START TEST locking_app_on_unlocked_coremask 00:15:58.022 ************************************ 00:15:58.022 09:57:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_unlocked_coremask 00:15:58.022 09:57:11 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=63138 00:15:58.022 09:57:11 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 63138 /var/tmp/spdk.sock 00:15:58.022 09:57:11 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:15:58.022 09:57:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 63138 ']' 00:15:58.023 09:57:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:58.023 09:57:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:58.023 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:58.023 09:57:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:58.023 09:57:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:58.023 09:57:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:15:58.023 [2024-07-15 09:57:11.464062] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:15:58.023 [2024-07-15 09:57:11.464136] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63138 ] 00:15:58.023 [2024-07-15 09:57:11.602153] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
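The non_locking_app_on_locked_coremask run above, and the locking_app_on_unlocked_coremask steps now starting, both show that two spdk_tgt instances can share a core mask as long as only one of them takes the spdk_cpu_lock files; the other is started with --disable-cpumask-locks (hence the "CPU core locks deactivated" notice) and its own -r RPC socket. A hedged sketch of that arrangement, with a fixed sleep standing in for the waitforlisten helper:

  SPDK_BIN=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt

  # First instance claims core 0 (creates /var/tmp/spdk_cpu_lock_000).
  "$SPDK_BIN" -m 0x1 &
  pid1=$!

  # Second instance on the same core: no lock acquisition, separate RPC socket.
  "$SPDK_BIN" -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &
  pid2=$!

  sleep 1   # crude stand-in for waitforlisten on both sockets
  lslocks -p "$pid1" | grep -c spdk_cpu_lock   # expected: 1 (the lock holder)
  lslocks -p "$pid2" | grep -c spdk_cpu_lock   # expected: 0 (locks were never taken)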
00:15:58.023 [2024-07-15 09:57:11.602219] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:58.280 [2024-07-15 09:57:11.709833] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:58.846 09:57:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:58.846 09:57:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:15:58.846 09:57:12 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=63165 00:15:58.846 09:57:12 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 63165 /var/tmp/spdk2.sock 00:15:58.846 09:57:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 63165 ']' 00:15:58.846 09:57:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:15:58.846 09:57:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:58.846 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:15:58.846 09:57:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:15:58.846 09:57:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:58.846 09:57:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:15:58.846 09:57:12 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:15:58.847 [2024-07-15 09:57:12.412175] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:15:58.847 [2024-07-15 09:57:12.412248] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63165 ] 00:15:59.107 [2024-07-15 09:57:12.543224] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:59.369 [2024-07-15 09:57:12.762795] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:59.936 09:57:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:59.936 09:57:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:15:59.936 09:57:13 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 63165 00:15:59.936 09:57:13 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 63165 00:15:59.936 09:57:13 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:16:00.195 09:57:13 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 63138 00:16:00.195 09:57:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 63138 ']' 00:16:00.195 09:57:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 63138 00:16:00.195 09:57:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:16:00.195 09:57:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:00.195 09:57:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 63138 00:16:00.195 09:57:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:00.195 09:57:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:00.195 killing process with pid 63138 00:16:00.195 09:57:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 63138' 00:16:00.195 09:57:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 63138 00:16:00.195 09:57:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 63138 00:16:01.133 09:57:14 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 63165 00:16:01.133 09:57:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 63165 ']' 00:16:01.133 09:57:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 63165 00:16:01.133 09:57:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:16:01.133 09:57:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:01.133 09:57:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 63165 00:16:01.133 09:57:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:01.133 killing process with pid 63165 00:16:01.133 09:57:14 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:01.133 09:57:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 63165' 00:16:01.133 09:57:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 63165 00:16:01.133 09:57:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 63165 00:16:01.392 00:16:01.392 real 0m3.350s 00:16:01.392 user 0m3.608s 00:16:01.392 sys 0m0.947s 00:16:01.392 09:57:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:01.392 09:57:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:16:01.392 ************************************ 00:16:01.392 END TEST locking_app_on_unlocked_coremask 00:16:01.392 ************************************ 00:16:01.392 09:57:14 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:16:01.392 09:57:14 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:16:01.392 09:57:14 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:16:01.392 09:57:14 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:01.392 09:57:14 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:16:01.392 ************************************ 00:16:01.392 START TEST locking_app_on_locked_coremask 00:16:01.392 ************************************ 00:16:01.392 09:57:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_locked_coremask 00:16:01.392 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:01.392 09:57:14 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=63244 00:16:01.392 09:57:14 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 63244 /var/tmp/spdk.sock 00:16:01.392 09:57:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 63244 ']' 00:16:01.392 09:57:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:01.392 09:57:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:01.392 09:57:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:01.392 09:57:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:01.392 09:57:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:16:01.392 09:57:14 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:16:01.392 [2024-07-15 09:57:14.859389] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:16:01.392 [2024-07-15 09:57:14.859465] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63244 ] 00:16:01.651 [2024-07-15 09:57:14.998222] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:01.651 [2024-07-15 09:57:15.107928] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:02.219 09:57:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:02.219 09:57:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:16:02.219 09:57:15 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=63272 00:16:02.219 09:57:15 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 63272 /var/tmp/spdk2.sock 00:16:02.219 09:57:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@648 -- # local es=0 00:16:02.219 09:57:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 63272 /var/tmp/spdk2.sock 00:16:02.219 09:57:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:16:02.219 09:57:15 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:16:02.219 09:57:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:02.219 09:57:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:16:02.219 09:57:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:02.219 09:57:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # waitforlisten 63272 /var/tmp/spdk2.sock 00:16:02.219 09:57:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 63272 ']' 00:16:02.219 09:57:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:16:02.219 09:57:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:02.219 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:16:02.219 09:57:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:16:02.219 09:57:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:02.219 09:57:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:16:02.219 [2024-07-15 09:57:15.788979] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:16:02.219 [2024-07-15 09:57:15.789058] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63272 ] 00:16:02.478 [2024-07-15 09:57:15.920736] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 63244 has claimed it. 00:16:02.478 [2024-07-15 09:57:15.920807] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:16:03.148 ERROR: process (pid: 63272) is no longer running 00:16:03.148 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (63272) - No such process 00:16:03.148 09:57:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:03.148 09:57:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 1 00:16:03.148 09:57:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # es=1 00:16:03.148 09:57:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:03.148 09:57:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:03.148 09:57:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:03.148 09:57:16 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 63244 00:16:03.148 09:57:16 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:16:03.148 09:57:16 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 63244 00:16:03.408 09:57:16 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 63244 00:16:03.408 09:57:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 63244 ']' 00:16:03.408 09:57:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 63244 00:16:03.408 09:57:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:16:03.408 09:57:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:03.408 09:57:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 63244 00:16:03.408 09:57:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:03.408 09:57:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:03.408 killing process with pid 63244 00:16:03.408 09:57:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 63244' 00:16:03.408 09:57:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 63244 00:16:03.408 09:57:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 63244 00:16:03.667 00:16:03.667 real 0m2.378s 00:16:03.667 user 0m2.701s 00:16:03.667 sys 0m0.559s 00:16:03.667 09:57:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:03.667 09:57:17 event.cpu_locks.locking_app_on_locked_coremask 
-- common/autotest_common.sh@10 -- # set +x 00:16:03.667 ************************************ 00:16:03.667 END TEST locking_app_on_locked_coremask 00:16:03.667 ************************************ 00:16:03.667 09:57:17 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:16:03.667 09:57:17 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:16:03.667 09:57:17 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:16:03.667 09:57:17 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:03.667 09:57:17 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:16:03.667 ************************************ 00:16:03.667 START TEST locking_overlapped_coremask 00:16:03.667 ************************************ 00:16:03.667 09:57:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask 00:16:03.667 09:57:17 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:16:03.667 09:57:17 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=63318 00:16:03.667 09:57:17 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 63318 /var/tmp/spdk.sock 00:16:03.667 09:57:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 63318 ']' 00:16:03.667 09:57:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:03.667 09:57:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:03.667 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:03.667 09:57:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:03.667 09:57:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:03.667 09:57:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:16:03.927 [2024-07-15 09:57:17.296298] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
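The locking_app_on_locked_coremask run above, and the locking_overlapped_coremask steps that follow, both exercise the failure path: a second spdk_tgt whose mask overlaps an already-claimed core logs "Cannot create lock on core N, probably process <pid> has claimed it." and stops with "Unable to acquire lock on assigned core mask - exiting." A hedged sketch of asserting that refusal without the test's NOT/waitforlisten helpers, treating any non-zero exit of the second instance as the expected outcome:

  SPDK_BIN=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt

  "$SPDK_BIN" -m 0x7 &     # first instance claims cores 0-2
  pid1=$!
  sleep 1                  # allow it to create /var/tmp/spdk_cpu_lock_000..002

  # Overlapping mask (core 2 is shared), second RPC socket: startup should be refused.
  if "$SPDK_BIN" -m 0x1c -r /var/tmp/spdk2.sock; then
      echo "unexpected: second instance started despite the core 2 lock"
  else
      echo "second instance refused to start, as expected"
  fi

  kill "$pid1"; wait "$pid1" 2>/dev/null || true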
00:16:03.927 [2024-07-15 09:57:17.296391] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63318 ] 00:16:03.927 [2024-07-15 09:57:17.424474] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:04.186 [2024-07-15 09:57:17.533334] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:04.186 [2024-07-15 09:57:17.533514] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:04.186 [2024-07-15 09:57:17.533516] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:04.756 09:57:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:04.756 09:57:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 0 00:16:04.756 09:57:18 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:16:04.756 09:57:18 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=63348 00:16:04.756 09:57:18 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 63348 /var/tmp/spdk2.sock 00:16:04.756 09:57:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@648 -- # local es=0 00:16:04.756 09:57:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 63348 /var/tmp/spdk2.sock 00:16:04.756 09:57:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:16:04.756 09:57:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:04.756 09:57:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:16:04.756 09:57:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:04.756 09:57:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # waitforlisten 63348 /var/tmp/spdk2.sock 00:16:04.756 09:57:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 63348 ']' 00:16:04.756 09:57:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:16:04.756 09:57:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:04.756 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:16:04.756 09:57:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:16:04.756 09:57:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:04.756 09:57:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:16:04.756 [2024-07-15 09:57:18.211704] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:16:04.756 [2024-07-15 09:57:18.211774] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63348 ] 00:16:05.016 [2024-07-15 09:57:18.342637] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 63318 has claimed it. 00:16:05.016 [2024-07-15 09:57:18.342709] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:16:05.584 ERROR: process (pid: 63348) is no longer running 00:16:05.584 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (63348) - No such process 00:16:05.584 09:57:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:05.584 09:57:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 1 00:16:05.584 09:57:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # es=1 00:16:05.584 09:57:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:05.584 09:57:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:05.584 09:57:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:05.584 09:57:18 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:16:05.584 09:57:18 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:16:05.584 09:57:18 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:16:05.584 09:57:18 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:16:05.584 09:57:18 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 63318 00:16:05.584 09:57:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@948 -- # '[' -z 63318 ']' 00:16:05.584 09:57:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # kill -0 63318 00:16:05.584 09:57:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # uname 00:16:05.584 09:57:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:05.584 09:57:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 63318 00:16:05.584 09:57:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:05.584 09:57:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:05.584 killing process with pid 63318 00:16:05.584 09:57:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 63318' 00:16:05.584 09:57:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@967 -- # kill 63318 00:16:05.584 09:57:18 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # wait 63318 00:16:05.843 00:16:05.843 real 0m2.014s 00:16:05.843 user 0m5.474s 00:16:05.843 sys 0m0.367s 00:16:05.843 09:57:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:05.843 09:57:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:16:05.843 ************************************ 00:16:05.843 END TEST locking_overlapped_coremask 00:16:05.843 ************************************ 00:16:05.843 09:57:19 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:16:05.843 09:57:19 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:16:05.843 09:57:19 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:16:05.843 09:57:19 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:05.843 09:57:19 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:16:05.843 ************************************ 00:16:05.843 START TEST locking_overlapped_coremask_via_rpc 00:16:05.843 ************************************ 00:16:05.843 09:57:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask_via_rpc 00:16:05.843 09:57:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=63394 00:16:05.843 09:57:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:16:05.843 09:57:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 63394 /var/tmp/spdk.sock 00:16:05.843 09:57:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 63394 ']' 00:16:05.843 09:57:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:05.843 09:57:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:05.843 09:57:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:05.843 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:05.843 09:57:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:05.843 09:57:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:05.843 [2024-07-15 09:57:19.379958] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:16:05.843 [2024-07-15 09:57:19.380032] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63394 ] 00:16:06.102 [2024-07-15 09:57:19.517606] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
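The check_remaining_locks helper used at the end of the locking_overlapped_coremask run above simply globs /var/tmp/spdk_cpu_lock_* and compares the result against the files expected for mask 0x7 (cores 0-2). A stand-alone sketch of that comparison, assuming the lock-file naming seen in this log:

  # With one target running on -m 0x7, exactly locks 000..002 should exist.
  shopt -s nullglob
  locks=(/var/tmp/spdk_cpu_lock_*)
  expected=(/var/tmp/spdk_cpu_lock_{000..002})

  if [[ "${locks[*]}" == "${expected[*]}" ]]; then
      echo "core locks 000-002 present, nothing extra"
  else
      echo "unexpected lock files: ${locks[*]}"
  fi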
00:16:06.102 [2024-07-15 09:57:19.517666] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:06.102 [2024-07-15 09:57:19.623532] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:06.102 [2024-07-15 09:57:19.623602] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:06.102 [2024-07-15 09:57:19.623606] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:07.034 09:57:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:07.034 09:57:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:16:07.034 09:57:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=63424 00:16:07.034 09:57:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:16:07.034 09:57:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 63424 /var/tmp/spdk2.sock 00:16:07.034 09:57:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 63424 ']' 00:16:07.034 09:57:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:16:07.034 09:57:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:07.034 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:16:07.034 09:57:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:16:07.034 09:57:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:07.034 09:57:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:07.034 [2024-07-15 09:57:20.323823] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:16:07.034 [2024-07-15 09:57:20.323902] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63424 ] 00:16:07.034 [2024-07-15 09:57:20.459872] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:16:07.034 [2024-07-15 09:57:20.459937] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:07.291 [2024-07-15 09:57:20.684804] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:16:07.291 [2024-07-15 09:57:20.688877] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:07.292 [2024-07-15 09:57:20.688881] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:16:07.858 09:57:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:07.858 09:57:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:16:07.858 09:57:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:16:07.858 09:57:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:07.858 09:57:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:07.858 09:57:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:07.858 09:57:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:16:07.858 09:57:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@648 -- # local es=0 00:16:07.858 09:57:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:16:07.858 09:57:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:16:07.858 09:57:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:07.858 09:57:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:16:07.858 09:57:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:07.858 09:57:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:16:07.858 09:57:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:07.858 09:57:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:07.858 [2024-07-15 09:57:21.267804] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 63394 has claimed it. 
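At this point two spdk_tgt instances hold overlapping core masks: pid 63394 was launched with -m 0x7 (cores 0-2) and pid 63424 with -m 0x1c (cores 2-4), both with --disable-cpumask-locks so the overlap is tolerated at startup. The "Cannot create lock on core 2" error just above, and the JSON-RPC failure that follows, come from the one core the two masks share. A minimal sketch of that check, using only the masks taken from the trace (plain bitwise AND, nothing SPDK-specific):

  $ printf 'shared mask: 0x%x\n' $(( 0x7 & 0x1c ))
  shared mask: 0x4    # bit 2 set, i.e. core 2 is claimed by both targets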
00:16:07.858 2024/07/15 09:57:21 error on JSON-RPC call, method: framework_enable_cpumask_locks, params: map[], err: error received for framework_enable_cpumask_locks method, err: Code=-32603 Msg=Failed to claim CPU core: 2 00:16:07.858 request: 00:16:07.858 { 00:16:07.858 "method": "framework_enable_cpumask_locks", 00:16:07.858 "params": {} 00:16:07.858 } 00:16:07.858 Got JSON-RPC error response 00:16:07.858 GoRPCClient: error on JSON-RPC call 00:16:07.858 09:57:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:16:07.858 09:57:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # es=1 00:16:07.858 09:57:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:07.858 09:57:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:07.858 09:57:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:07.858 09:57:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 63394 /var/tmp/spdk.sock 00:16:07.858 09:57:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 63394 ']' 00:16:07.859 09:57:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:07.859 09:57:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:07.859 09:57:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:07.859 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:07.859 09:57:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:07.859 09:57:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:08.116 09:57:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:08.116 09:57:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:16:08.116 09:57:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 63424 /var/tmp/spdk2.sock 00:16:08.116 09:57:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 63424 ']' 00:16:08.116 09:57:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:16:08.116 09:57:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:08.116 09:57:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:16:08.116 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
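The framework_enable_cpumask_locks call issued without -s (against the primary /var/tmp/spdk.sock) succeeded, so pid 63394 now holds lock files for cores 0-2, which is exactly why the same RPC aimed at /var/tmp/spdk2.sock was rejected above. The check_remaining_locks step below asserts that state by comparing the lock files on disk against an expected list; outside the harness the equivalent hypothetical check is just a listing (file names copied from the locks_expected array used below):

  $ ls /var/tmp/spdk_cpu_lock_*
  /var/tmp/spdk_cpu_lock_000  /var/tmp/spdk_cpu_lock_001  /var/tmp/spdk_cpu_lock_002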
00:16:08.116 09:57:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:08.116 09:57:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:08.374 09:57:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:08.374 09:57:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:16:08.374 09:57:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:16:08.374 09:57:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:16:08.374 09:57:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:16:08.374 09:57:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:16:08.374 00:16:08.374 real 0m2.438s 00:16:08.374 user 0m1.178s 00:16:08.374 sys 0m0.203s 00:16:08.374 ************************************ 00:16:08.374 END TEST locking_overlapped_coremask_via_rpc 00:16:08.374 ************************************ 00:16:08.374 09:57:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:08.374 09:57:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:08.374 09:57:21 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:16:08.374 09:57:21 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:16:08.374 09:57:21 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 63394 ]] 00:16:08.374 09:57:21 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 63394 00:16:08.374 09:57:21 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 63394 ']' 00:16:08.374 09:57:21 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 63394 00:16:08.374 09:57:21 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 00:16:08.374 09:57:21 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:08.374 09:57:21 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 63394 00:16:08.374 09:57:21 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:08.374 killing process with pid 63394 00:16:08.374 09:57:21 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:08.374 09:57:21 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 63394' 00:16:08.374 09:57:21 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 63394 00:16:08.374 09:57:21 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 63394 00:16:08.631 09:57:22 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 63424 ]] 00:16:08.631 09:57:22 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 63424 00:16:08.631 09:57:22 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 63424 ']' 00:16:08.631 09:57:22 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 63424 00:16:08.631 09:57:22 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 00:16:08.631 09:57:22 
event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:08.631 09:57:22 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 63424 00:16:08.631 killing process with pid 63424 00:16:08.631 09:57:22 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:16:08.631 09:57:22 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:16:08.631 09:57:22 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 63424' 00:16:08.631 09:57:22 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 63424 00:16:08.631 09:57:22 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 63424 00:16:09.196 09:57:22 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:16:09.196 09:57:22 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:16:09.196 09:57:22 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 63394 ]] 00:16:09.196 09:57:22 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 63394 00:16:09.196 09:57:22 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 63394 ']' 00:16:09.196 09:57:22 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 63394 00:16:09.196 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (63394) - No such process 00:16:09.196 Process with pid 63394 is not found 00:16:09.196 09:57:22 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 63394 is not found' 00:16:09.196 09:57:22 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 63424 ]] 00:16:09.196 09:57:22 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 63424 00:16:09.196 09:57:22 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 63424 ']' 00:16:09.196 09:57:22 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 63424 00:16:09.196 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (63424) - No such process 00:16:09.196 Process with pid 63424 is not found 00:16:09.196 09:57:22 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 63424 is not found' 00:16:09.196 09:57:22 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:16:09.196 00:16:09.196 real 0m18.224s 00:16:09.196 user 0m31.594s 00:16:09.196 sys 0m4.885s 00:16:09.196 09:57:22 event.cpu_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:09.196 09:57:22 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:16:09.196 ************************************ 00:16:09.196 END TEST cpu_locks 00:16:09.196 ************************************ 00:16:09.196 09:57:22 event -- common/autotest_common.sh@1142 -- # return 0 00:16:09.196 00:16:09.196 real 0m46.483s 00:16:09.196 user 1m31.208s 00:16:09.196 sys 0m8.592s 00:16:09.196 09:57:22 event -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:09.196 09:57:22 event -- common/autotest_common.sh@10 -- # set +x 00:16:09.196 ************************************ 00:16:09.196 END TEST event 00:16:09.196 ************************************ 00:16:09.196 09:57:22 -- common/autotest_common.sh@1142 -- # return 0 00:16:09.196 09:57:22 -- spdk/autotest.sh@182 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:16:09.196 09:57:22 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:16:09.196 09:57:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:09.196 09:57:22 -- common/autotest_common.sh@10 -- # set +x 00:16:09.196 ************************************ 00:16:09.196 START TEST thread 
00:16:09.196 ************************************ 00:16:09.196 09:57:22 thread -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:16:09.196 * Looking for test storage... 00:16:09.196 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:16:09.196 09:57:22 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:16:09.196 09:57:22 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:16:09.196 09:57:22 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:09.470 09:57:22 thread -- common/autotest_common.sh@10 -- # set +x 00:16:09.470 ************************************ 00:16:09.470 START TEST thread_poller_perf 00:16:09.470 ************************************ 00:16:09.470 09:57:22 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:16:09.470 [2024-07-15 09:57:22.816561] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:16:09.470 [2024-07-15 09:57:22.816702] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63565 ] 00:16:09.470 [2024-07-15 09:57:22.957704] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:09.728 [2024-07-15 09:57:23.072330] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:09.728 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:16:10.661 ====================================== 00:16:10.661 busy:2301593306 (cyc) 00:16:10.661 total_run_count: 374000 00:16:10.661 tsc_hz: 2290000000 (cyc) 00:16:10.661 ====================================== 00:16:10.661 poller_cost: 6153 (cyc), 2686 (nsec) 00:16:10.661 00:16:10.661 real 0m1.376s 00:16:10.661 user 0m1.216s 00:16:10.661 sys 0m0.053s 00:16:10.661 09:57:24 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:10.661 09:57:24 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:16:10.661 ************************************ 00:16:10.661 END TEST thread_poller_perf 00:16:10.661 ************************************ 00:16:10.661 09:57:24 thread -- common/autotest_common.sh@1142 -- # return 0 00:16:10.661 09:57:24 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:16:10.661 09:57:24 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:16:10.661 09:57:24 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:10.661 09:57:24 thread -- common/autotest_common.sh@10 -- # set +x 00:16:10.661 ************************************ 00:16:10.661 START TEST thread_poller_perf 00:16:10.661 ************************************ 00:16:10.661 09:57:24 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:16:10.918 [2024-07-15 09:57:24.257277] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
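The poller_perf summary above is internally consistent: assuming poller_cost is the busy cycle count divided by the number of poller runs, and the nanosecond figure is that quotient scaled by the reported TSC frequency, the printed values can be reproduced by hand. A sketch of that arithmetic (not the tool's own code, just the numbers from the report above):

  $ awk 'BEGIN { busy=2301593306; runs=374000; tsc_hz=2290000000;
                 cyc=int(busy/runs); printf "%d (cyc), %d (nsec)\n", cyc, cyc*1e9/tsc_hz }'
  6153 (cyc), 2686 (nsec)

The second run below, with a 0-microsecond period (-l 0), shows the same relationship at a far lower per-poll cost.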
00:16:10.918 [2024-07-15 09:57:24.257407] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63606 ] 00:16:10.918 [2024-07-15 09:57:24.400823] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:11.176 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:16:11.176 [2024-07-15 09:57:24.507114] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:12.111 ====================================== 00:16:12.111 busy:2291881808 (cyc) 00:16:12.111 total_run_count: 4901000 00:16:12.111 tsc_hz: 2290000000 (cyc) 00:16:12.111 ====================================== 00:16:12.111 poller_cost: 467 (cyc), 203 (nsec) 00:16:12.111 00:16:12.111 real 0m1.357s 00:16:12.111 user 0m1.195s 00:16:12.111 sys 0m0.056s 00:16:12.111 09:57:25 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:12.111 ************************************ 00:16:12.111 END TEST thread_poller_perf 00:16:12.111 09:57:25 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:16:12.111 ************************************ 00:16:12.111 09:57:25 thread -- common/autotest_common.sh@1142 -- # return 0 00:16:12.111 09:57:25 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:16:12.111 00:16:12.111 real 0m2.976s 00:16:12.111 user 0m2.488s 00:16:12.111 sys 0m0.282s 00:16:12.111 09:57:25 thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:12.111 09:57:25 thread -- common/autotest_common.sh@10 -- # set +x 00:16:12.111 ************************************ 00:16:12.111 END TEST thread 00:16:12.111 ************************************ 00:16:12.111 09:57:25 -- common/autotest_common.sh@1142 -- # return 0 00:16:12.111 09:57:25 -- spdk/autotest.sh@183 -- # run_test accel /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:16:12.111 09:57:25 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:16:12.111 09:57:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:12.111 09:57:25 -- common/autotest_common.sh@10 -- # set +x 00:16:12.370 ************************************ 00:16:12.370 START TEST accel 00:16:12.370 ************************************ 00:16:12.370 09:57:25 accel -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:16:12.370 * Looking for test storage... 
00:16:12.370 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:16:12.370 09:57:25 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:16:12.370 09:57:25 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:16:12.370 09:57:25 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:16:12.370 09:57:25 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=63675 00:16:12.370 09:57:25 accel -- accel/accel.sh@63 -- # waitforlisten 63675 00:16:12.370 09:57:25 accel -- common/autotest_common.sh@829 -- # '[' -z 63675 ']' 00:16:12.370 09:57:25 accel -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:12.370 09:57:25 accel -- accel/accel.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:16:12.370 09:57:25 accel -- accel/accel.sh@61 -- # build_accel_config 00:16:12.370 09:57:25 accel -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:12.370 09:57:25 accel -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:12.370 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:12.370 09:57:25 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:16:12.370 09:57:25 accel -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:12.370 09:57:25 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:16:12.370 09:57:25 accel -- common/autotest_common.sh@10 -- # set +x 00:16:12.370 09:57:25 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:16:12.370 09:57:25 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:16:12.370 09:57:25 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:16:12.370 09:57:25 accel -- accel/accel.sh@40 -- # local IFS=, 00:16:12.370 09:57:25 accel -- accel/accel.sh@41 -- # jq -r . 00:16:12.370 [2024-07-15 09:57:25.881114] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:16:12.370 [2024-07-15 09:57:25.881189] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63675 ] 00:16:12.628 [2024-07-15 09:57:26.005011] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:12.629 [2024-07-15 09:57:26.119054] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:13.206 09:57:26 accel -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:13.206 09:57:26 accel -- common/autotest_common.sh@862 -- # return 0 00:16:13.206 09:57:26 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:16:13.206 09:57:26 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:16:13.206 09:57:26 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:16:13.206 09:57:26 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:16:13.206 09:57:26 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:16:13.206 09:57:26 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:16:13.206 09:57:26 accel -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:13.206 09:57:26 accel -- common/autotest_common.sh@10 -- # set +x 00:16:13.206 09:57:26 accel -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:16:13.206 09:57:26 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:13.465 09:57:26 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:16:13.465 09:57:26 accel -- accel/accel.sh@72 -- # IFS== 00:16:13.465 09:57:26 accel -- accel/accel.sh@72 -- # read -r opc module 00:16:13.465 09:57:26 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:16:13.465 09:57:26 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:16:13.465 09:57:26 accel -- accel/accel.sh@72 -- # IFS== 00:16:13.465 09:57:26 accel -- accel/accel.sh@72 -- # read -r opc module 00:16:13.465 09:57:26 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:16:13.465 09:57:26 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:16:13.465 09:57:26 accel -- accel/accel.sh@72 -- # IFS== 00:16:13.465 09:57:26 accel -- accel/accel.sh@72 -- # read -r opc module 00:16:13.465 09:57:26 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:16:13.465 09:57:26 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:16:13.465 09:57:26 accel -- accel/accel.sh@72 -- # IFS== 00:16:13.465 09:57:26 accel -- accel/accel.sh@72 -- # read -r opc module 00:16:13.465 09:57:26 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:16:13.465 09:57:26 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:16:13.465 09:57:26 accel -- accel/accel.sh@72 -- # IFS== 00:16:13.465 09:57:26 accel -- accel/accel.sh@72 -- # read -r opc module 00:16:13.466 09:57:26 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:16:13.466 09:57:26 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:16:13.466 09:57:26 accel -- accel/accel.sh@72 -- # IFS== 00:16:13.466 09:57:26 accel -- accel/accel.sh@72 -- # read -r opc module 00:16:13.466 09:57:26 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:16:13.466 09:57:26 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:16:13.466 09:57:26 accel -- accel/accel.sh@72 -- # IFS== 00:16:13.466 09:57:26 accel -- accel/accel.sh@72 -- # read -r opc module 00:16:13.466 09:57:26 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:16:13.466 09:57:26 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:16:13.466 09:57:26 accel -- accel/accel.sh@72 -- # IFS== 00:16:13.466 09:57:26 accel -- accel/accel.sh@72 -- # read -r opc module 00:16:13.466 09:57:26 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:16:13.466 09:57:26 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:16:13.466 09:57:26 accel -- accel/accel.sh@72 -- # IFS== 00:16:13.466 09:57:26 accel -- accel/accel.sh@72 -- # read -r opc module 00:16:13.466 09:57:26 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:16:13.466 09:57:26 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:16:13.466 09:57:26 accel -- accel/accel.sh@72 -- # IFS== 00:16:13.466 09:57:26 accel -- accel/accel.sh@72 -- # read -r opc module 00:16:13.466 09:57:26 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:16:13.466 09:57:26 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:16:13.466 09:57:26 accel -- accel/accel.sh@72 -- # IFS== 00:16:13.466 09:57:26 accel -- accel/accel.sh@72 -- # read -r opc module 00:16:13.466 09:57:26 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:16:13.466 09:57:26 accel -- accel/accel.sh@71 -- # for opc_opt in 
"${exp_opcs[@]}" 00:16:13.466 09:57:26 accel -- accel/accel.sh@72 -- # IFS== 00:16:13.466 09:57:26 accel -- accel/accel.sh@72 -- # read -r opc module 00:16:13.466 09:57:26 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:16:13.466 09:57:26 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:16:13.466 09:57:26 accel -- accel/accel.sh@72 -- # IFS== 00:16:13.466 09:57:26 accel -- accel/accel.sh@72 -- # read -r opc module 00:16:13.466 09:57:26 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:16:13.466 09:57:26 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:16:13.466 09:57:26 accel -- accel/accel.sh@72 -- # IFS== 00:16:13.466 09:57:26 accel -- accel/accel.sh@72 -- # read -r opc module 00:16:13.466 09:57:26 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:16:13.466 09:57:26 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:16:13.466 09:57:26 accel -- accel/accel.sh@72 -- # IFS== 00:16:13.466 09:57:26 accel -- accel/accel.sh@72 -- # read -r opc module 00:16:13.466 09:57:26 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:16:13.466 09:57:26 accel -- accel/accel.sh@75 -- # killprocess 63675 00:16:13.466 09:57:26 accel -- common/autotest_common.sh@948 -- # '[' -z 63675 ']' 00:16:13.466 09:57:26 accel -- common/autotest_common.sh@952 -- # kill -0 63675 00:16:13.466 09:57:26 accel -- common/autotest_common.sh@953 -- # uname 00:16:13.466 09:57:26 accel -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:13.466 09:57:26 accel -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 63675 00:16:13.466 09:57:26 accel -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:13.466 09:57:26 accel -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:13.466 09:57:26 accel -- common/autotest_common.sh@966 -- # echo 'killing process with pid 63675' 00:16:13.466 killing process with pid 63675 00:16:13.466 09:57:26 accel -- common/autotest_common.sh@967 -- # kill 63675 00:16:13.466 09:57:26 accel -- common/autotest_common.sh@972 -- # wait 63675 00:16:13.724 09:57:27 accel -- accel/accel.sh@76 -- # trap - ERR 00:16:13.724 09:57:27 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:16:13.724 09:57:27 accel -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:13.724 09:57:27 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:13.724 09:57:27 accel -- common/autotest_common.sh@10 -- # set +x 00:16:13.724 09:57:27 accel.accel_help -- common/autotest_common.sh@1123 -- # accel_perf -h 00:16:13.724 09:57:27 accel.accel_help -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:16:13.724 09:57:27 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:16:13.724 09:57:27 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:16:13.724 09:57:27 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:16:13.724 09:57:27 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:16:13.724 09:57:27 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:16:13.724 09:57:27 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:16:13.724 09:57:27 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:16:13.724 09:57:27 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 
00:16:13.724 09:57:27 accel.accel_help -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:13.725 09:57:27 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:16:13.725 09:57:27 accel -- common/autotest_common.sh@1142 -- # return 0 00:16:13.725 09:57:27 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:16:13.725 09:57:27 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:16:13.725 09:57:27 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:13.725 09:57:27 accel -- common/autotest_common.sh@10 -- # set +x 00:16:13.725 ************************************ 00:16:13.725 START TEST accel_missing_filename 00:16:13.725 ************************************ 00:16:13.725 09:57:27 accel.accel_missing_filename -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress 00:16:13.725 09:57:27 accel.accel_missing_filename -- common/autotest_common.sh@648 -- # local es=0 00:16:13.725 09:57:27 accel.accel_missing_filename -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress 00:16:13.725 09:57:27 accel.accel_missing_filename -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:16:13.725 09:57:27 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:13.725 09:57:27 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # type -t accel_perf 00:16:13.725 09:57:27 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:13.725 09:57:27 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress 00:16:13.725 09:57:27 accel.accel_missing_filename -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:16:13.725 09:57:27 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:16:13.725 09:57:27 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:16:13.725 09:57:27 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:16:13.725 09:57:27 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:16:13.725 09:57:27 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:16:13.725 09:57:27 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:16:13.725 09:57:27 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:16:13.725 09:57:27 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:16:13.725 [2024-07-15 09:57:27.285002] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:16:13.725 [2024-07-15 09:57:27.285086] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63749 ] 00:16:13.985 [2024-07-15 09:57:27.422872] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:13.985 [2024-07-15 09:57:27.523424] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:13.985 [2024-07-15 09:57:27.564337] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:16:14.244 [2024-07-15 09:57:27.623365] accel_perf.c:1463:main: *ERROR*: ERROR starting application 00:16:14.244 A filename is required. 
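The aborted run above is the intended negative case: the compress workload reads its input through -l, which the test deliberately omitted. Per the accel_perf usage text reproduced a little further down, a working compress invocation would look roughly like this (binary and input paths are the ones this log already uses; flags are illustrative):

  $ /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w compress \
        -l /home/vagrant/spdk_repo/spdk/test/accel/bib

The very next test (accel_compress_verify) runs essentially this command plus -y, which fails for a different reason: the compress path does not support result verification.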
00:16:14.244 09:57:27 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # es=234 00:16:14.244 09:57:27 accel.accel_missing_filename -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:14.244 ************************************ 00:16:14.244 END TEST accel_missing_filename 00:16:14.244 ************************************ 00:16:14.244 09:57:27 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # es=106 00:16:14.244 09:57:27 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # case "$es" in 00:16:14.244 09:57:27 accel.accel_missing_filename -- common/autotest_common.sh@668 -- # es=1 00:16:14.244 09:57:27 accel.accel_missing_filename -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:14.244 00:16:14.244 real 0m0.451s 00:16:14.244 user 0m0.293s 00:16:14.244 sys 0m0.096s 00:16:14.244 09:57:27 accel.accel_missing_filename -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:14.244 09:57:27 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:16:14.244 09:57:27 accel -- common/autotest_common.sh@1142 -- # return 0 00:16:14.244 09:57:27 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:16:14.244 09:57:27 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:16:14.244 09:57:27 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:14.244 09:57:27 accel -- common/autotest_common.sh@10 -- # set +x 00:16:14.244 ************************************ 00:16:14.244 START TEST accel_compress_verify 00:16:14.244 ************************************ 00:16:14.244 09:57:27 accel.accel_compress_verify -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:16:14.244 09:57:27 accel.accel_compress_verify -- common/autotest_common.sh@648 -- # local es=0 00:16:14.244 09:57:27 accel.accel_compress_verify -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:16:14.244 09:57:27 accel.accel_compress_verify -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:16:14.244 09:57:27 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:14.244 09:57:27 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # type -t accel_perf 00:16:14.244 09:57:27 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:14.244 09:57:27 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:16:14.244 09:57:27 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:16:14.244 09:57:27 accel.accel_compress_verify -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:16:14.244 09:57:27 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:16:14.244 09:57:27 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:16:14.244 09:57:27 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:16:14.244 09:57:27 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:16:14.244 09:57:27 accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:16:14.244 09:57:27 accel.accel_compress_verify -- 
accel/accel.sh@40 -- # local IFS=, 00:16:14.244 09:57:27 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:16:14.244 [2024-07-15 09:57:27.789856] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:16:14.244 [2024-07-15 09:57:27.789935] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63769 ] 00:16:14.504 [2024-07-15 09:57:27.929954] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:14.504 [2024-07-15 09:57:28.036880] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:14.504 [2024-07-15 09:57:28.078956] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:16:14.763 [2024-07-15 09:57:28.139245] accel_perf.c:1463:main: *ERROR*: ERROR starting application 00:16:14.763 00:16:14.763 Compression does not support the verify option, aborting. 00:16:14.763 09:57:28 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # es=161 00:16:14.763 09:57:28 accel.accel_compress_verify -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:14.763 09:57:28 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # es=33 00:16:14.763 09:57:28 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # case "$es" in 00:16:14.763 09:57:28 accel.accel_compress_verify -- common/autotest_common.sh@668 -- # es=1 00:16:14.763 09:57:28 accel.accel_compress_verify -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:14.763 00:16:14.763 real 0m0.465s 00:16:14.763 user 0m0.298s 00:16:14.763 sys 0m0.105s 00:16:14.763 09:57:28 accel.accel_compress_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:14.763 09:57:28 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:16:14.763 ************************************ 00:16:14.763 END TEST accel_compress_verify 00:16:14.763 ************************************ 00:16:14.763 09:57:28 accel -- common/autotest_common.sh@1142 -- # return 0 00:16:14.763 09:57:28 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:16:14.763 09:57:28 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:16:14.763 09:57:28 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:14.763 09:57:28 accel -- common/autotest_common.sh@10 -- # set +x 00:16:14.763 ************************************ 00:16:14.763 START TEST accel_wrong_workload 00:16:14.763 ************************************ 00:16:14.763 09:57:28 accel.accel_wrong_workload -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w foobar 00:16:14.763 09:57:28 accel.accel_wrong_workload -- common/autotest_common.sh@648 -- # local es=0 00:16:14.763 09:57:28 accel.accel_wrong_workload -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:16:14.763 09:57:28 accel.accel_wrong_workload -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:16:14.763 09:57:28 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:14.763 09:57:28 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # type -t accel_perf 00:16:14.763 09:57:28 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:14.763 09:57:28 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w foobar 
00:16:14.763 09:57:28 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:16:14.763 09:57:28 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:16:14.763 09:57:28 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:16:14.763 09:57:28 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:16:14.763 09:57:28 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:16:14.763 09:57:28 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:16:14.763 09:57:28 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:16:14.763 09:57:28 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:16:14.763 09:57:28 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 00:16:14.763 Unsupported workload type: foobar 00:16:14.763 [2024-07-15 09:57:28.305198] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:16:14.763 accel_perf options: 00:16:14.763 [-h help message] 00:16:14.763 [-q queue depth per core] 00:16:14.763 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:16:14.763 [-T number of threads per core 00:16:14.763 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:16:14.763 [-t time in seconds] 00:16:14.763 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:16:14.763 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:16:14.763 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:16:14.763 [-l for compress/decompress workloads, name of uncompressed input file 00:16:14.763 [-S for crc32c workload, use this seed value (default 0) 00:16:14.763 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:16:14.763 [-f for fill workload, use this BYTE value (default 255) 00:16:14.763 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:16:14.763 [-y verify result if this switch is on] 00:16:14.763 [-a tasks to allocate per core (default: same value as -q)] 00:16:14.763 Can be used to spread operations across a wider range of memory. 
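The usage text above lists the accepted -w workload types; the negative test tripped the parser deliberately with 'foobar'. For contrast, a well-formed invocation using only flags from that list, the same combination the crc32c tests below drive through accel_test (seed 32, verify on), would be along these lines (illustrative, run here without the harness-supplied -c config):

  $ /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w crc32c -S 32 -y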
00:16:14.763 09:57:28 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # es=1 00:16:14.763 09:57:28 accel.accel_wrong_workload -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:14.763 09:57:28 accel.accel_wrong_workload -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:14.763 09:57:28 accel.accel_wrong_workload -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:14.763 00:16:14.763 real 0m0.040s 00:16:14.763 user 0m0.023s 00:16:14.763 sys 0m0.017s 00:16:14.763 09:57:28 accel.accel_wrong_workload -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:14.763 09:57:28 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:16:14.763 ************************************ 00:16:14.763 END TEST accel_wrong_workload 00:16:14.763 ************************************ 00:16:15.023 09:57:28 accel -- common/autotest_common.sh@1142 -- # return 0 00:16:15.023 09:57:28 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:16:15.023 09:57:28 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:16:15.023 09:57:28 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:15.023 09:57:28 accel -- common/autotest_common.sh@10 -- # set +x 00:16:15.023 ************************************ 00:16:15.023 START TEST accel_negative_buffers 00:16:15.023 ************************************ 00:16:15.023 09:57:28 accel.accel_negative_buffers -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:16:15.023 09:57:28 accel.accel_negative_buffers -- common/autotest_common.sh@648 -- # local es=0 00:16:15.023 09:57:28 accel.accel_negative_buffers -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:16:15.023 09:57:28 accel.accel_negative_buffers -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:16:15.023 09:57:28 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:15.023 09:57:28 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # type -t accel_perf 00:16:15.023 09:57:28 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:15.023 09:57:28 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w xor -y -x -1 00:16:15.023 09:57:28 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:16:15.023 09:57:28 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:16:15.023 09:57:28 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:16:15.023 09:57:28 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:16:15.023 09:57:28 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:16:15.023 09:57:28 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:16:15.023 09:57:28 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:16:15.023 09:57:28 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:16:15.023 09:57:28 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:16:15.023 -x option must be non-negative. 
00:16:15.023 [2024-07-15 09:57:28.397263] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:16:15.023 accel_perf options: 00:16:15.023 [-h help message] 00:16:15.023 [-q queue depth per core] 00:16:15.023 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:16:15.023 [-T number of threads per core 00:16:15.023 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:16:15.023 [-t time in seconds] 00:16:15.023 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:16:15.023 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:16:15.023 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:16:15.023 [-l for compress/decompress workloads, name of uncompressed input file 00:16:15.023 [-S for crc32c workload, use this seed value (default 0) 00:16:15.023 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:16:15.023 [-f for fill workload, use this BYTE value (default 255) 00:16:15.023 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:16:15.023 [-y verify result if this switch is on] 00:16:15.023 [-a tasks to allocate per core (default: same value as -q)] 00:16:15.023 Can be used to spread operations across a wider range of memory. 00:16:15.023 09:57:28 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # es=1 00:16:15.023 09:57:28 accel.accel_negative_buffers -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:15.023 09:57:28 accel.accel_negative_buffers -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:15.023 09:57:28 accel.accel_negative_buffers -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:15.023 00:16:15.023 real 0m0.040s 00:16:15.023 user 0m0.021s 00:16:15.023 sys 0m0.018s 00:16:15.023 09:57:28 accel.accel_negative_buffers -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:15.023 09:57:28 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:16:15.023 ************************************ 00:16:15.023 END TEST accel_negative_buffers 00:16:15.023 ************************************ 00:16:15.023 09:57:28 accel -- common/autotest_common.sh@1142 -- # return 0 00:16:15.023 09:57:28 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:16:15.023 09:57:28 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:16:15.023 09:57:28 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:15.023 09:57:28 accel -- common/autotest_common.sh@10 -- # set +x 00:16:15.023 ************************************ 00:16:15.023 START TEST accel_crc32c 00:16:15.023 ************************************ 00:16:15.023 09:57:28 accel.accel_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -S 32 -y 00:16:15.023 09:57:28 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:16:15.023 09:57:28 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:16:15.023 09:57:28 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:16:15.023 09:57:28 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:16:15.023 09:57:28 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:16:15.023 09:57:28 accel.accel_crc32c -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 
-w crc32c -S 32 -y 00:16:15.023 09:57:28 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:16:15.023 09:57:28 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:16:15.023 09:57:28 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:16:15.023 09:57:28 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:16:15.023 09:57:28 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:16:15.023 09:57:28 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:16:15.023 09:57:28 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:16:15.023 09:57:28 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:16:15.023 [2024-07-15 09:57:28.483269] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:16:15.023 [2024-07-15 09:57:28.483363] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63833 ] 00:16:15.283 [2024-07-15 09:57:28.622408] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:15.283 [2024-07-15 09:57:28.728006] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:15.283 09:57:28 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:16:15.283 09:57:28 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:16:15.283 09:57:28 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:16:15.283 09:57:28 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:16:15.283 09:57:28 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:16:15.283 09:57:28 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:16:15.283 09:57:28 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:16:15.283 09:57:28 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:16:15.283 09:57:28 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:16:15.283 09:57:28 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:16:15.283 09:57:28 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:16:15.283 09:57:28 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:16:15.283 09:57:28 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:16:15.283 09:57:28 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:16:15.283 09:57:28 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:16:15.283 09:57:28 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:16:15.283 09:57:28 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:16:15.283 09:57:28 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:16:15.283 09:57:28 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:16:15.283 09:57:28 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:16:15.283 09:57:28 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:16:15.283 09:57:28 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:16:15.283 09:57:28 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:16:15.283 09:57:28 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:16:15.283 09:57:28 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:16:15.283 09:57:28 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:16:15.283 09:57:28 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:16:15.283 09:57:28 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:16:15.283 09:57:28 accel.accel_crc32c -- accel/accel.sh@19 
-- # read -r var val 00:16:15.283 09:57:28 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:16:15.283 09:57:28 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:16:15.283 09:57:28 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:16:15.283 09:57:28 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:16:15.283 09:57:28 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:16:15.283 09:57:28 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:16:15.283 09:57:28 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:16:15.283 09:57:28 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:16:15.283 09:57:28 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:16:15.283 09:57:28 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:16:15.283 09:57:28 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:16:15.283 09:57:28 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:16:15.283 09:57:28 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:16:15.283 09:57:28 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:16:15.283 09:57:28 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:16:15.283 09:57:28 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:16:15.283 09:57:28 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:16:15.283 09:57:28 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:16:15.284 09:57:28 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:16:15.284 09:57:28 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:16:15.284 09:57:28 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:16:15.284 09:57:28 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:16:15.284 09:57:28 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:16:15.284 09:57:28 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:16:15.284 09:57:28 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:16:15.284 09:57:28 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:16:15.284 09:57:28 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:16:15.284 09:57:28 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:16:15.284 09:57:28 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:16:15.284 09:57:28 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:16:15.284 09:57:28 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:16:15.284 09:57:28 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:16:15.284 09:57:28 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:16:15.284 09:57:28 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:16:15.284 09:57:28 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:16:15.284 09:57:28 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:16:15.284 09:57:28 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:16:15.284 09:57:28 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:16:15.284 09:57:28 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:16:15.284 09:57:28 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:16:15.284 09:57:28 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:16:16.665 09:57:29 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:16:16.665 09:57:29 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:16:16.665 09:57:29 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:16:16.665 09:57:29 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var 
val 00:16:16.665 09:57:29 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:16:16.665 09:57:29 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:16:16.665 09:57:29 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:16:16.665 09:57:29 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:16:16.665 09:57:29 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:16:16.665 09:57:29 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:16:16.665 09:57:29 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:16:16.665 09:57:29 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:16:16.665 09:57:29 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:16:16.665 09:57:29 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:16:16.665 09:57:29 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:16:16.665 09:57:29 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:16:16.665 09:57:29 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:16:16.665 09:57:29 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:16:16.665 09:57:29 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:16:16.665 09:57:29 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:16:16.665 09:57:29 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:16:16.665 09:57:29 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:16:16.665 09:57:29 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:16:16.665 09:57:29 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:16:16.665 09:57:29 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:16:16.665 09:57:29 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:16:16.665 09:57:29 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:16:16.665 00:16:16.665 real 0m1.456s 00:16:16.665 user 0m1.278s 00:16:16.665 sys 0m0.094s 00:16:16.665 09:57:29 accel.accel_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:16.665 09:57:29 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:16:16.665 ************************************ 00:16:16.665 END TEST accel_crc32c 00:16:16.665 ************************************ 00:16:16.665 09:57:29 accel -- common/autotest_common.sh@1142 -- # return 0 00:16:16.665 09:57:29 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:16:16.665 09:57:29 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:16:16.665 09:57:29 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:16.665 09:57:29 accel -- common/autotest_common.sh@10 -- # set +x 00:16:16.665 ************************************ 00:16:16.666 START TEST accel_crc32c_C2 00:16:16.666 ************************************ 00:16:16.666 09:57:29 accel.accel_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -y -C 2 00:16:16.666 09:57:29 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:16:16.666 09:57:29 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:16:16.666 09:57:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:16:16.666 09:57:29 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:16:16.666 09:57:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:16:16.666 09:57:29 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:16:16.666 09:57:29 
accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:16:16.666 09:57:29 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:16:16.666 09:57:29 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:16:16.666 09:57:29 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:16:16.666 09:57:29 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:16:16.666 09:57:29 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:16:16.666 09:57:29 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:16:16.666 09:57:29 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:16:16.666 [2024-07-15 09:57:30.000929] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:16:16.666 [2024-07-15 09:57:30.001017] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63862 ] 00:16:16.666 [2024-07-15 09:57:30.134763] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:16.666 [2024-07-15 09:57:30.238145] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:16.926 09:57:30 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:16:16.926 09:57:30 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:16:16.926 09:57:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:16:16.926 09:57:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:16:16.926 09:57:30 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:16:16.926 09:57:30 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:16:16.926 09:57:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:16:16.926 09:57:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:16:16.926 09:57:30 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:16:16.926 09:57:30 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:16:16.926 09:57:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:16:16.926 09:57:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:16:16.926 09:57:30 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:16:16.926 09:57:30 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:16:16.926 09:57:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:16:16.926 09:57:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:16:16.926 09:57:30 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:16:16.926 09:57:30 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:16:16.926 09:57:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:16:16.926 09:57:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:16:16.926 09:57:30 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:16:16.926 09:57:30 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:16:16.926 09:57:30 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:16:16.926 09:57:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:16:16.926 09:57:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:16:16.926 09:57:30 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:16:16.926 09:57:30 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:16:16.926 09:57:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # 
IFS=: 00:16:16.926 09:57:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:16:16.926 09:57:30 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:16:16.926 09:57:30 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:16:16.926 09:57:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:16:16.926 09:57:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:16:16.926 09:57:30 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:16:16.926 09:57:30 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:16:16.926 09:57:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:16:16.926 09:57:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:16:16.926 09:57:30 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:16:16.926 09:57:30 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:16:16.926 09:57:30 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:16:16.926 09:57:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:16:16.926 09:57:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:16:16.926 09:57:30 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:16:16.926 09:57:30 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:16:16.926 09:57:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:16:16.926 09:57:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:16:16.926 09:57:30 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:16:16.926 09:57:30 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:16:16.926 09:57:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:16:16.926 09:57:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:16:16.926 09:57:30 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:16:16.926 09:57:30 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:16:16.927 09:57:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:16:16.927 09:57:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:16:16.927 09:57:30 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:16:16.927 09:57:30 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:16:16.927 09:57:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:16:16.927 09:57:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:16:16.927 09:57:30 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:16:16.927 09:57:30 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:16:16.927 09:57:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:16:16.927 09:57:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:16:16.927 09:57:30 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:16:16.927 09:57:30 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:16:16.927 09:57:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:16:16.927 09:57:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:16:16.927 09:57:30 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:16:16.927 09:57:30 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:16:16.927 09:57:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:16:16.927 09:57:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:16:17.866 09:57:31 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:16:17.866 09:57:31 
accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:16:17.866 09:57:31 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:16:17.866 09:57:31 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:16:17.866 09:57:31 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:16:17.866 09:57:31 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:16:17.866 09:57:31 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:16:17.866 09:57:31 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:16:17.866 09:57:31 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:16:17.866 09:57:31 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:16:17.866 09:57:31 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:16:17.866 09:57:31 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:16:17.866 09:57:31 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:16:17.866 09:57:31 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:16:17.866 09:57:31 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:16:17.866 09:57:31 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:16:17.866 09:57:31 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:16:17.866 09:57:31 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:16:17.866 09:57:31 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:16:17.866 09:57:31 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:16:17.866 09:57:31 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:16:17.866 09:57:31 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:16:17.866 09:57:31 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:16:17.866 09:57:31 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:16:17.866 09:57:31 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:16:17.866 09:57:31 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:16:17.866 09:57:31 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:16:17.866 00:16:17.866 real 0m1.454s 00:16:17.866 user 0m1.269s 00:16:17.866 sys 0m0.101s 00:16:17.866 09:57:31 accel.accel_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:17.866 09:57:31 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:16:17.866 ************************************ 00:16:17.866 END TEST accel_crc32c_C2 00:16:17.866 ************************************ 00:16:18.126 09:57:31 accel -- common/autotest_common.sh@1142 -- # return 0 00:16:18.126 09:57:31 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:16:18.126 09:57:31 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:16:18.126 09:57:31 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:18.126 09:57:31 accel -- common/autotest_common.sh@10 -- # set +x 00:16:18.126 ************************************ 00:16:18.126 START TEST accel_copy 00:16:18.126 ************************************ 00:16:18.126 09:57:31 accel.accel_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy -y 00:16:18.126 09:57:31 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:16:18.126 09:57:31 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 00:16:18.126 09:57:31 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:16:18.126 09:57:31 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:16:18.126 09:57:31 accel.accel_copy -- 
accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:16:18.126 09:57:31 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:16:18.126 09:57:31 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:16:18.126 09:57:31 accel.accel_copy -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:16:18.126 09:57:31 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:16:18.126 09:57:31 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:16:18.126 09:57:31 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:16:18.126 09:57:31 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:16:18.126 09:57:31 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:16:18.126 09:57:31 accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 00:16:18.126 [2024-07-15 09:57:31.513502] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:16:18.126 [2024-07-15 09:57:31.513597] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63898 ] 00:16:18.126 [2024-07-15 09:57:31.655369] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:18.386 [2024-07-15 09:57:31.755831] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:18.386 09:57:31 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:16:18.386 09:57:31 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:16:18.386 09:57:31 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:16:18.386 09:57:31 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:16:18.386 09:57:31 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:16:18.386 09:57:31 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:16:18.386 09:57:31 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:16:18.386 09:57:31 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:16:18.386 09:57:31 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:16:18.386 09:57:31 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:16:18.386 09:57:31 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:16:18.386 09:57:31 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:16:18.386 09:57:31 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:16:18.386 09:57:31 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:16:18.386 09:57:31 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:16:18.386 09:57:31 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:16:18.386 09:57:31 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:16:18.386 09:57:31 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:16:18.386 09:57:31 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:16:18.386 09:57:31 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:16:18.386 09:57:31 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:16:18.386 09:57:31 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:16:18.386 09:57:31 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:16:18.386 09:57:31 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:16:18.386 09:57:31 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:16:18.386 09:57:31 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:16:18.386 09:57:31 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:16:18.386 
09:57:31 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:16:18.386 09:57:31 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:16:18.386 09:57:31 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:16:18.386 09:57:31 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:16:18.386 09:57:31 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:16:18.386 09:57:31 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:16:18.386 09:57:31 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:16:18.386 09:57:31 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:16:18.386 09:57:31 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:16:18.386 09:57:31 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:16:18.386 09:57:31 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:16:18.386 09:57:31 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:16:18.386 09:57:31 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:16:18.386 09:57:31 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:16:18.386 09:57:31 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:16:18.386 09:57:31 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:16:18.386 09:57:31 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:16:18.386 09:57:31 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:16:18.386 09:57:31 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:16:18.386 09:57:31 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:16:18.386 09:57:31 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:16:18.386 09:57:31 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:16:18.386 09:57:31 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:16:18.386 09:57:31 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:16:18.386 09:57:31 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:16:18.386 09:57:31 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:16:18.386 09:57:31 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:16:18.386 09:57:31 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:16:18.386 09:57:31 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:16:18.386 09:57:31 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:16:18.386 09:57:31 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:16:18.386 09:57:31 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:16:18.386 09:57:31 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:16:18.386 09:57:31 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:16:18.386 09:57:31 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:16:18.386 09:57:31 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:16:18.386 09:57:31 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:16:18.386 09:57:31 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:16:18.386 09:57:31 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:16:19.767 09:57:32 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:16:19.767 09:57:32 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:16:19.767 09:57:32 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:16:19.767 09:57:32 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:16:19.767 09:57:32 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:16:19.767 09:57:32 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:16:19.767 09:57:32 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:16:19.767 09:57:32 accel.accel_copy -- accel/accel.sh@19 
-- # read -r var val 00:16:19.767 09:57:32 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:16:19.767 09:57:32 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:16:19.767 09:57:32 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:16:19.767 09:57:32 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:16:19.767 09:57:32 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:16:19.767 09:57:32 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:16:19.767 09:57:32 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:16:19.767 09:57:32 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:16:19.767 09:57:32 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:16:19.767 09:57:32 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:16:19.767 09:57:32 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:16:19.767 09:57:32 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:16:19.767 09:57:32 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:16:19.767 09:57:32 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:16:19.767 09:57:32 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:16:19.767 09:57:32 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:16:19.767 09:57:32 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:16:19.767 09:57:32 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:16:19.767 09:57:32 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:16:19.767 00:16:19.767 real 0m1.461s 00:16:19.767 user 0m1.277s 00:16:19.767 sys 0m0.095s 00:16:19.767 09:57:32 accel.accel_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:19.767 09:57:32 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:16:19.767 ************************************ 00:16:19.767 END TEST accel_copy 00:16:19.767 ************************************ 00:16:19.767 09:57:32 accel -- common/autotest_common.sh@1142 -- # return 0 00:16:19.767 09:57:32 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:16:19.767 09:57:32 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:16:19.767 09:57:32 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:19.767 09:57:32 accel -- common/autotest_common.sh@10 -- # set +x 00:16:19.767 ************************************ 00:16:19.767 START TEST accel_fill 00:16:19.767 ************************************ 00:16:19.767 09:57:32 accel.accel_fill -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:16:19.767 09:57:32 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:16:19.767 09:57:32 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:16:19.767 09:57:32 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:16:19.767 09:57:32 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:16:19.767 09:57:32 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:16:19.767 09:57:32 accel.accel_fill -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:16:19.767 09:57:32 accel.accel_fill -- accel/accel.sh@12 -- # build_accel_config 00:16:19.767 09:57:32 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:16:19.767 09:57:32 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:16:19.767 09:57:32 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:16:19.767 09:57:32 accel.accel_fill -- 
accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:16:19.767 09:57:32 accel.accel_fill -- accel/accel.sh@36 -- # [[ -n '' ]] 00:16:19.767 09:57:32 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:16:19.767 09:57:32 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 00:16:19.767 [2024-07-15 09:57:33.021460] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:16:19.767 [2024-07-15 09:57:33.021547] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63931 ] 00:16:19.767 [2024-07-15 09:57:33.160878] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:19.767 [2024-07-15 09:57:33.265643] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:19.767 09:57:33 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:16:19.767 09:57:33 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:16:19.767 09:57:33 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:16:19.767 09:57:33 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:16:19.767 09:57:33 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:16:19.767 09:57:33 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:16:19.767 09:57:33 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:16:19.767 09:57:33 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:16:19.767 09:57:33 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:16:19.767 09:57:33 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:16:19.767 09:57:33 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:16:19.767 09:57:33 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:16:19.767 09:57:33 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:16:19.767 09:57:33 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:16:19.767 09:57:33 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:16:19.767 09:57:33 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:16:19.767 09:57:33 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:16:19.767 09:57:33 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:16:19.767 09:57:33 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:16:19.767 09:57:33 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:16:19.767 09:57:33 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:16:19.767 09:57:33 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:16:19.767 09:57:33 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:16:19.767 09:57:33 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:16:19.767 09:57:33 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:16:19.767 09:57:33 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:16:19.767 09:57:33 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:16:19.767 09:57:33 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:16:19.767 09:57:33 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:16:19.767 09:57:33 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 00:16:19.767 09:57:33 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:16:19.767 09:57:33 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:16:19.767 09:57:33 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:16:19.767 09:57:33 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:16:19.767 09:57:33 accel.accel_fill -- 
accel/accel.sh@21 -- # case "$var" in 00:16:19.767 09:57:33 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:16:19.767 09:57:33 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:16:19.767 09:57:33 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:16:19.767 09:57:33 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:16:19.767 09:57:33 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software 00:16:19.767 09:57:33 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:16:19.767 09:57:33 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:16:19.767 09:57:33 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:16:19.767 09:57:33 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:16:19.767 09:57:33 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:16:19.768 09:57:33 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:16:19.768 09:57:33 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:16:19.768 09:57:33 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:16:19.768 09:57:33 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:16:19.768 09:57:33 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:16:19.768 09:57:33 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:16:19.768 09:57:33 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:16:19.768 09:57:33 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:16:19.768 09:57:33 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:16:19.768 09:57:33 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:16:19.768 09:57:33 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:16:19.768 09:57:33 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:16:19.768 09:57:33 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:16:19.768 09:57:33 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:16:19.768 09:57:33 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:16:19.768 09:57:33 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:16:19.768 09:57:33 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:16:19.768 09:57:33 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:16:19.768 09:57:33 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:16:19.768 09:57:33 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:16:19.768 09:57:33 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:16:19.768 09:57:33 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:16:19.768 09:57:33 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:16:19.768 09:57:33 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:16:19.768 09:57:33 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:16:21.144 09:57:34 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:16:21.144 09:57:34 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:16:21.144 09:57:34 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:16:21.144 09:57:34 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:16:21.144 09:57:34 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:16:21.144 09:57:34 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:16:21.144 09:57:34 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:16:21.144 09:57:34 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:16:21.144 09:57:34 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:16:21.144 09:57:34 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:16:21.144 09:57:34 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 
00:16:21.144 09:57:34 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:16:21.144 09:57:34 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:16:21.144 09:57:34 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:16:21.144 09:57:34 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:16:21.144 09:57:34 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:16:21.144 09:57:34 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:16:21.144 09:57:34 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:16:21.144 09:57:34 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:16:21.144 09:57:34 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:16:21.144 09:57:34 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:16:21.144 09:57:34 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:16:21.144 09:57:34 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:16:21.144 09:57:34 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:16:21.144 09:57:34 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:16:21.144 09:57:34 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:16:21.144 09:57:34 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:16:21.144 00:16:21.144 real 0m1.461s 00:16:21.144 user 0m1.275s 00:16:21.144 sys 0m0.091s 00:16:21.144 09:57:34 accel.accel_fill -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:21.144 09:57:34 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:16:21.144 ************************************ 00:16:21.144 END TEST accel_fill 00:16:21.144 ************************************ 00:16:21.144 09:57:34 accel -- common/autotest_common.sh@1142 -- # return 0 00:16:21.144 09:57:34 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:16:21.144 09:57:34 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:16:21.144 09:57:34 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:21.144 09:57:34 accel -- common/autotest_common.sh@10 -- # set +x 00:16:21.144 ************************************ 00:16:21.144 START TEST accel_copy_crc32c 00:16:21.144 ************************************ 00:16:21.144 09:57:34 accel.accel_copy_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y 00:16:21.144 09:57:34 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:16:21.144 09:57:34 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:16:21.144 09:57:34 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:16:21.144 09:57:34 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:16:21.144 09:57:34 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:16:21.144 09:57:34 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:16:21.144 09:57:34 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:16:21.144 09:57:34 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:16:21.144 09:57:34 accel.accel_copy_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:16:21.144 09:57:34 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:16:21.144 09:57:34 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:16:21.144 09:57:34 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:16:21.144 09:57:34 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # 
local IFS=, 00:16:21.144 09:57:34 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 00:16:21.144 [2024-07-15 09:57:34.541195] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:16:21.144 [2024-07-15 09:57:34.541286] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63966 ] 00:16:21.144 [2024-07-15 09:57:34.680220] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:21.403 [2024-07-15 09:57:34.789394] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:21.403 09:57:34 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:16:21.403 09:57:34 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:16:21.403 09:57:34 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:16:21.403 09:57:34 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:16:21.403 09:57:34 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:16:21.403 09:57:34 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:16:21.403 09:57:34 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:16:21.403 09:57:34 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:16:21.403 09:57:34 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:16:21.403 09:57:34 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:16:21.403 09:57:34 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:16:21.403 09:57:34 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:16:21.403 09:57:34 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:16:21.403 09:57:34 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:16:21.403 09:57:34 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:16:21.403 09:57:34 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:16:21.403 09:57:34 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:16:21.403 09:57:34 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:16:21.403 09:57:34 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:16:21.403 09:57:34 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:16:21.403 09:57:34 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:16:21.403 09:57:34 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:16:21.403 09:57:34 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:16:21.403 09:57:34 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:16:21.403 09:57:34 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:16:21.403 09:57:34 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:16:21.403 09:57:34 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:16:21.403 09:57:34 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:16:21.403 09:57:34 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:16:21.403 09:57:34 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:16:21.403 09:57:34 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:16:21.403 09:57:34 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:16:21.403 09:57:34 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:16:21.403 09:57:34 accel.accel_copy_crc32c -- 
accel/accel.sh@20 -- # val='4096 bytes' 00:16:21.403 09:57:34 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:16:21.403 09:57:34 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:16:21.403 09:57:34 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:16:21.403 09:57:34 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:16:21.403 09:57:34 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:16:21.403 09:57:34 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:16:21.403 09:57:34 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:16:21.403 09:57:34 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:16:21.403 09:57:34 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:16:21.403 09:57:34 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:16:21.403 09:57:34 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:16:21.403 09:57:34 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:16:21.403 09:57:34 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:16:21.403 09:57:34 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:16:21.403 09:57:34 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:16:21.403 09:57:34 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:16:21.403 09:57:34 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:16:21.403 09:57:34 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:16:21.403 09:57:34 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:16:21.403 09:57:34 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:16:21.403 09:57:34 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:16:21.403 09:57:34 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:16:21.403 09:57:34 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:16:21.403 09:57:34 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:16:21.403 09:57:34 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:16:21.403 09:57:34 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:16:21.403 09:57:34 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:16:21.403 09:57:34 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:16:21.403 09:57:34 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:16:21.403 09:57:34 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:16:21.403 09:57:34 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:16:21.403 09:57:34 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:16:21.403 09:57:34 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:16:21.403 09:57:34 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:16:21.403 09:57:34 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:16:21.403 09:57:34 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:16:21.403 09:57:34 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:16:21.403 09:57:34 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:16:21.403 09:57:34 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:16:21.403 09:57:34 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:16:22.780 09:57:35 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:16:22.780 09:57:35 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 
00:16:22.780 09:57:35 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:16:22.780 09:57:35 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:16:22.780 09:57:35 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:16:22.780 09:57:35 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:16:22.780 09:57:35 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:16:22.780 09:57:35 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:16:22.780 09:57:35 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:16:22.780 09:57:35 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:16:22.780 09:57:35 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:16:22.780 09:57:35 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:16:22.780 09:57:35 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:16:22.780 09:57:35 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:16:22.780 09:57:35 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:16:22.780 09:57:35 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:16:22.780 09:57:35 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:16:22.780 09:57:35 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:16:22.780 09:57:35 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:16:22.780 09:57:35 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:16:22.780 09:57:35 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:16:22.780 09:57:35 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:16:22.780 09:57:35 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:16:22.780 09:57:35 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:16:22.780 09:57:35 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:16:22.780 09:57:35 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:16:22.780 09:57:35 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:16:22.780 00:16:22.780 real 0m1.475s 00:16:22.780 user 0m1.284s 00:16:22.780 sys 0m0.096s 00:16:22.780 09:57:35 accel.accel_copy_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:22.780 09:57:35 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:16:22.780 ************************************ 00:16:22.780 END TEST accel_copy_crc32c 00:16:22.780 ************************************ 00:16:22.780 09:57:36 accel -- common/autotest_common.sh@1142 -- # return 0 00:16:22.780 09:57:36 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:16:22.780 09:57:36 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:16:22.780 09:57:36 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:22.780 09:57:36 accel -- common/autotest_common.sh@10 -- # set +x 00:16:22.780 ************************************ 00:16:22.780 START TEST accel_copy_crc32c_C2 00:16:22.780 ************************************ 00:16:22.780 09:57:36 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:16:22.780 09:57:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:16:22.780 09:57:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:16:22.780 09:57:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:16:22.780 
09:57:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:16:22.780 09:57:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:16:22.780 09:57:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:16:22.780 09:57:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:16:22.780 09:57:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:16:22.780 09:57:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:16:22.780 09:57:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:16:22.780 09:57:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:16:22.780 09:57:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:16:22.780 09:57:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:16:22.780 09:57:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:16:22.780 [2024-07-15 09:57:36.060380] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:16:22.780 [2024-07-15 09:57:36.060487] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64000 ] 00:16:22.780 [2024-07-15 09:57:36.224226] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:22.780 [2024-07-15 09:57:36.340496] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:23.040 09:57:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:16:23.040 09:57:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:16:23.040 09:57:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:16:23.040 09:57:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:16:23.040 09:57:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:16:23.040 09:57:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:16:23.040 09:57:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:16:23.040 09:57:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:16:23.040 09:57:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:16:23.040 09:57:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:16:23.040 09:57:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:16:23.040 09:57:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:16:23.040 09:57:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:16:23.040 09:57:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:16:23.040 09:57:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:16:23.040 09:57:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:16:23.040 09:57:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:16:23.040 09:57:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:16:23.040 09:57:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:16:23.040 09:57:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:16:23.040 09:57:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:16:23.040 09:57:36 accel.accel_copy_crc32c_C2 -- 
accel/accel.sh@21 -- # case "$var" in 00:16:23.040 09:57:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:16:23.040 09:57:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:16:23.040 09:57:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:16:23.040 09:57:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:16:23.040 09:57:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:16:23.040 09:57:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:16:23.040 09:57:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:16:23.040 09:57:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:16:23.040 09:57:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:16:23.040 09:57:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:16:23.040 09:57:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:16:23.040 09:57:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:16:23.040 09:57:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:16:23.040 09:57:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:16:23.040 09:57:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:16:23.040 09:57:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:16:23.040 09:57:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:16:23.040 09:57:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:16:23.040 09:57:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:16:23.040 09:57:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:16:23.040 09:57:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:16:23.040 09:57:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:16:23.040 09:57:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:16:23.040 09:57:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:16:23.040 09:57:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:16:23.040 09:57:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:16:23.040 09:57:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:16:23.040 09:57:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:16:23.040 09:57:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:16:23.040 09:57:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:16:23.040 09:57:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:16:23.040 09:57:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:16:23.040 09:57:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:16:23.040 09:57:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:16:23.040 09:57:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:16:23.040 09:57:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:16:23.040 09:57:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:16:23.040 09:57:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:16:23.041 09:57:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:16:23.041 09:57:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:16:23.041 09:57:36 
accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:16:23.041 09:57:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:16:23.041 09:57:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:16:23.041 09:57:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:16:23.041 09:57:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:16:23.041 09:57:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:16:23.041 09:57:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:16:23.041 09:57:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:16:23.041 09:57:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:16:23.041 09:57:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:16:23.041 09:57:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:16:23.041 09:57:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:16:24.000 09:57:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:16:24.000 09:57:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:16:24.000 09:57:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:16:24.000 09:57:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:16:24.000 09:57:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:16:24.000 09:57:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:16:24.000 09:57:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:16:24.000 09:57:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:16:24.000 09:57:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:16:24.001 09:57:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:16:24.001 09:57:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:16:24.001 09:57:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:16:24.001 09:57:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:16:24.001 09:57:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:16:24.001 09:57:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:16:24.001 09:57:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:16:24.001 09:57:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:16:24.001 09:57:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:16:24.001 09:57:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:16:24.001 09:57:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:16:24.001 09:57:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:16:24.001 09:57:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:16:24.001 09:57:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:16:24.001 09:57:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:16:24.001 09:57:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:16:24.001 09:57:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:16:24.001 09:57:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:16:24.001 00:16:24.001 real 0m1.496s 00:16:24.001 user 0m1.301s 00:16:24.001 sys 0m0.100s 00:16:24.001 09:57:37 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 
00:16:24.001 09:57:37 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:16:24.001 ************************************ 00:16:24.001 END TEST accel_copy_crc32c_C2 00:16:24.001 ************************************ 00:16:24.261 09:57:37 accel -- common/autotest_common.sh@1142 -- # return 0 00:16:24.261 09:57:37 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:16:24.261 09:57:37 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:16:24.261 09:57:37 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:24.261 09:57:37 accel -- common/autotest_common.sh@10 -- # set +x 00:16:24.261 ************************************ 00:16:24.261 START TEST accel_dualcast 00:16:24.261 ************************************ 00:16:24.261 09:57:37 accel.accel_dualcast -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dualcast -y 00:16:24.261 09:57:37 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:16:24.261 09:57:37 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:16:24.261 09:57:37 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:16:24.261 09:57:37 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:16:24.261 09:57:37 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:16:24.261 09:57:37 accel.accel_dualcast -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:16:24.261 09:57:37 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 00:16:24.261 09:57:37 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:16:24.261 09:57:37 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:16:24.261 09:57:37 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:16:24.261 09:57:37 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:16:24.261 09:57:37 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:16:24.261 09:57:37 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:16:24.261 09:57:37 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r . 00:16:24.261 [2024-07-15 09:57:37.628436] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:16:24.261 [2024-07-15 09:57:37.628534] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64035 ] 00:16:24.261 [2024-07-15 09:57:37.768334] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:24.520 [2024-07-15 09:57:37.876826] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:24.520 09:57:37 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:16:24.520 09:57:37 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:16:24.520 09:57:37 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:16:24.520 09:57:37 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:16:24.520 09:57:37 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:16:24.520 09:57:37 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:16:24.520 09:57:37 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:16:24.520 09:57:37 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:16:24.520 09:57:37 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:16:24.520 09:57:37 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:16:24.520 09:57:37 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:16:24.520 09:57:37 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:16:24.520 09:57:37 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:16:24.520 09:57:37 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:16:24.520 09:57:37 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:16:24.520 09:57:37 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:16:24.520 09:57:37 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:16:24.520 09:57:37 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:16:24.520 09:57:37 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:16:24.520 09:57:37 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:16:24.520 09:57:37 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:16:24.520 09:57:37 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:16:24.520 09:57:37 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:16:24.520 09:57:37 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:16:24.520 09:57:37 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:16:24.520 09:57:37 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:16:24.520 09:57:37 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:16:24.520 09:57:37 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:16:24.520 09:57:37 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:16:24.520 09:57:37 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:16:24.520 09:57:37 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:16:24.520 09:57:37 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:16:24.520 09:57:37 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:16:24.520 09:57:37 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:16:24.520 09:57:37 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:16:24.520 09:57:37 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:16:24.520 09:57:37 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:16:24.520 09:57:37 accel.accel_dualcast 
-- accel/accel.sh@19 -- # read -r var val 00:16:24.520 09:57:37 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:16:24.520 09:57:37 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:16:24.520 09:57:37 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:16:24.520 09:57:37 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:16:24.520 09:57:37 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:16:24.520 09:57:37 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:16:24.520 09:57:37 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:16:24.520 09:57:37 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:16:24.520 09:57:37 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:16:24.520 09:57:37 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:16:24.520 09:57:37 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:16:24.520 09:57:37 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:16:24.520 09:57:37 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:16:24.520 09:57:37 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:16:24.520 09:57:37 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:16:24.520 09:57:37 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:16:24.520 09:57:37 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:16:24.520 09:57:37 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:16:24.520 09:57:37 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:16:24.520 09:57:37 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:16:24.520 09:57:37 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:16:24.520 09:57:37 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:16:24.520 09:57:37 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:16:24.520 09:57:37 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:16:24.520 09:57:37 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:16:24.520 09:57:37 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:16:24.520 09:57:37 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:16:24.520 09:57:37 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:16:25.900 09:57:39 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:16:25.900 09:57:39 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:16:25.900 09:57:39 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:16:25.900 09:57:39 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:16:25.900 09:57:39 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:16:25.900 09:57:39 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:16:25.900 09:57:39 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:16:25.900 09:57:39 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:16:25.900 09:57:39 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:16:25.900 09:57:39 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:16:25.900 09:57:39 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:16:25.900 09:57:39 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:16:25.900 09:57:39 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:16:25.900 09:57:39 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:16:25.900 09:57:39 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:16:25.900 09:57:39 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var 
val 00:16:25.900 09:57:39 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:16:25.900 09:57:39 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:16:25.900 09:57:39 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:16:25.900 09:57:39 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:16:25.900 09:57:39 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:16:25.900 09:57:39 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:16:25.900 09:57:39 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:16:25.900 09:57:39 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:16:25.900 09:57:39 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]] 00:16:25.900 09:57:39 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:16:25.900 09:57:39 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:16:25.900 00:16:25.900 real 0m1.474s 00:16:25.900 user 0m1.287s 00:16:25.900 sys 0m0.102s 00:16:25.901 09:57:39 accel.accel_dualcast -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:25.901 09:57:39 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 00:16:25.901 ************************************ 00:16:25.901 END TEST accel_dualcast 00:16:25.901 ************************************ 00:16:25.901 09:57:39 accel -- common/autotest_common.sh@1142 -- # return 0 00:16:25.901 09:57:39 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:16:25.901 09:57:39 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:16:25.901 09:57:39 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:25.901 09:57:39 accel -- common/autotest_common.sh@10 -- # set +x 00:16:25.901 ************************************ 00:16:25.901 START TEST accel_compare 00:16:25.901 ************************************ 00:16:25.901 09:57:39 accel.accel_compare -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compare -y 00:16:25.901 09:57:39 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:16:25.901 09:57:39 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:16:25.901 09:57:39 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:16:25.901 09:57:39 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:16:25.901 09:57:39 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:16:25.901 09:57:39 accel.accel_compare -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:16:25.901 09:57:39 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config 00:16:25.901 09:57:39 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:16:25.901 09:57:39 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:16:25.901 09:57:39 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:16:25.901 09:57:39 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:16:25.901 09:57:39 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:16:25.901 09:57:39 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:16:25.901 09:57:39 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 00:16:25.901 [2024-07-15 09:57:39.160067] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:16:25.901 [2024-07-15 09:57:39.160163] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64069 ] 00:16:25.901 [2024-07-15 09:57:39.298697] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:25.901 [2024-07-15 09:57:39.405225] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:25.901 09:57:39 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:16:25.901 09:57:39 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:16:25.901 09:57:39 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:16:25.901 09:57:39 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:16:25.901 09:57:39 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:16:25.901 09:57:39 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:16:25.901 09:57:39 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:16:25.901 09:57:39 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:16:25.901 09:57:39 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:16:25.901 09:57:39 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:16:25.901 09:57:39 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:16:25.901 09:57:39 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:16:25.901 09:57:39 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:16:25.901 09:57:39 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:16:25.901 09:57:39 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:16:25.901 09:57:39 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:16:25.901 09:57:39 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:16:25.901 09:57:39 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:16:25.901 09:57:39 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:16:25.901 09:57:39 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:16:25.901 09:57:39 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:16:25.901 09:57:39 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:16:25.901 09:57:39 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:16:25.901 09:57:39 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:16:25.901 09:57:39 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:16:25.901 09:57:39 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:16:25.901 09:57:39 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:16:25.901 09:57:39 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:16:25.901 09:57:39 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:16:25.901 09:57:39 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:16:25.901 09:57:39 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:16:25.901 09:57:39 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:16:25.901 09:57:39 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:16:25.901 09:57:39 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:16:25.901 09:57:39 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:16:25.901 09:57:39 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:16:25.901 09:57:39 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:16:25.901 09:57:39 accel.accel_compare -- accel/accel.sh@19 -- # read -r var 
val 00:16:25.901 09:57:39 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:16:25.901 09:57:39 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:16:25.901 09:57:39 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:16:25.901 09:57:39 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:16:25.901 09:57:39 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:16:25.901 09:57:39 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:16:25.901 09:57:39 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:16:25.901 09:57:39 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:16:25.901 09:57:39 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:16:25.901 09:57:39 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:16:25.901 09:57:39 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:16:25.901 09:57:39 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:16:25.901 09:57:39 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:16:25.901 09:57:39 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:16:25.901 09:57:39 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:16:25.901 09:57:39 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:16:25.901 09:57:39 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:16:25.901 09:57:39 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:16:25.901 09:57:39 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:16:25.901 09:57:39 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:16:25.901 09:57:39 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:16:25.901 09:57:39 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:16:25.901 09:57:39 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:16:25.901 09:57:39 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:16:25.901 09:57:39 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:16:25.901 09:57:39 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:16:25.901 09:57:39 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:16:25.901 09:57:39 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:16:27.319 09:57:40 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:16:27.319 09:57:40 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:16:27.319 09:57:40 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:16:27.319 09:57:40 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:16:27.319 09:57:40 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:16:27.319 09:57:40 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:16:27.319 09:57:40 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:16:27.319 09:57:40 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:16:27.319 09:57:40 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:16:27.319 09:57:40 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:16:27.319 09:57:40 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:16:27.319 09:57:40 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:16:27.319 09:57:40 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:16:27.319 09:57:40 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:16:27.319 09:57:40 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:16:27.319 09:57:40 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:16:27.319 09:57:40 accel.accel_compare -- accel/accel.sh@20 -- # val= 
00:16:27.319 09:57:40 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:16:27.319 09:57:40 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:16:27.319 09:57:40 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:16:27.319 09:57:40 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:16:27.319 09:57:40 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:16:27.319 09:57:40 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:16:27.319 09:57:40 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:16:27.319 09:57:40 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:16:27.319 09:57:40 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:16:27.319 09:57:40 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:16:27.319 00:16:27.319 real 0m1.483s 00:16:27.319 user 0m1.300s 00:16:27.319 sys 0m0.096s 00:16:27.319 09:57:40 accel.accel_compare -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:27.319 09:57:40 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:16:27.319 ************************************ 00:16:27.319 END TEST accel_compare 00:16:27.319 ************************************ 00:16:27.319 09:57:40 accel -- common/autotest_common.sh@1142 -- # return 0 00:16:27.319 09:57:40 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:16:27.319 09:57:40 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:16:27.319 09:57:40 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:27.319 09:57:40 accel -- common/autotest_common.sh@10 -- # set +x 00:16:27.319 ************************************ 00:16:27.319 START TEST accel_xor 00:16:27.319 ************************************ 00:16:27.319 09:57:40 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y 00:16:27.319 09:57:40 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:16:27.319 09:57:40 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:16:27.319 09:57:40 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:16:27.319 09:57:40 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:16:27.319 09:57:40 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:16:27.319 09:57:40 accel.accel_xor -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:16:27.319 09:57:40 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:16:27.319 09:57:40 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:16:27.319 09:57:40 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:16:27.319 09:57:40 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:16:27.319 09:57:40 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:16:27.319 09:57:40 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:16:27.319 09:57:40 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:16:27.319 09:57:40 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:16:27.319 [2024-07-15 09:57:40.708859] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:16:27.319 [2024-07-15 09:57:40.708958] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64104 ] 00:16:27.319 [2024-07-15 09:57:40.838246] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:27.578 [2024-07-15 09:57:40.946821] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:27.578 09:57:40 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:16:27.578 09:57:40 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:16:27.578 09:57:40 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:16:27.578 09:57:40 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:16:27.578 09:57:40 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:16:27.578 09:57:40 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:16:27.578 09:57:40 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:16:27.578 09:57:40 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:16:27.578 09:57:40 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:16:27.578 09:57:40 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:16:27.578 09:57:40 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:16:27.578 09:57:40 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:16:27.578 09:57:40 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:16:27.578 09:57:40 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:16:27.578 09:57:40 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:16:27.578 09:57:40 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:16:27.578 09:57:40 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:16:27.578 09:57:40 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:16:27.578 09:57:40 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:16:27.578 09:57:40 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:16:27.578 09:57:40 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:16:27.578 09:57:40 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:16:27.578 09:57:40 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:16:27.578 09:57:40 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:16:27.578 09:57:40 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:16:27.578 09:57:40 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:16:27.578 09:57:40 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:16:27.578 09:57:40 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:16:27.578 09:57:40 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:16:27.578 09:57:40 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:16:27.578 09:57:40 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:16:27.578 09:57:40 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:16:27.578 09:57:40 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:16:27.578 09:57:41 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:16:27.578 09:57:41 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:16:27.578 09:57:41 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:16:27.578 09:57:41 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:16:27.578 09:57:41 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:16:27.578 09:57:41 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:16:27.578 09:57:41 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 
00:16:27.578 09:57:41 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:16:27.578 09:57:41 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:16:27.578 09:57:41 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:16:27.578 09:57:41 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:16:27.578 09:57:41 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:16:27.578 09:57:41 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:16:27.578 09:57:41 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:16:27.578 09:57:41 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:16:27.578 09:57:41 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:16:27.578 09:57:41 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:16:27.578 09:57:41 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:16:27.578 09:57:41 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:16:27.578 09:57:41 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:16:27.578 09:57:41 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:16:27.578 09:57:41 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:16:27.578 09:57:41 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:16:27.578 09:57:41 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:16:27.578 09:57:41 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:16:27.578 09:57:41 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:16:27.578 09:57:41 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:16:27.578 09:57:41 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:16:27.578 09:57:41 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:16:27.578 09:57:41 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:16:27.578 09:57:41 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:16:27.578 09:57:41 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:16:27.578 09:57:41 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:16:27.578 09:57:41 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:16:27.578 09:57:41 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:16:27.578 09:57:41 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:16:27.578 09:57:41 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:16:28.954 09:57:42 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:16:28.954 09:57:42 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:16:28.954 09:57:42 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:16:28.954 09:57:42 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:16:28.954 09:57:42 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:16:28.954 09:57:42 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:16:28.954 09:57:42 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:16:28.954 09:57:42 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:16:28.954 09:57:42 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:16:28.954 09:57:42 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:16:28.954 09:57:42 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:16:28.954 09:57:42 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:16:28.954 09:57:42 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:16:28.954 09:57:42 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:16:28.954 09:57:42 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:16:28.954 09:57:42 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:16:28.954 09:57:42 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:16:28.954 09:57:42 accel.accel_xor 
-- accel/accel.sh@21 -- # case "$var" in 00:16:28.954 09:57:42 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:16:28.954 09:57:42 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:16:28.954 09:57:42 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:16:28.954 09:57:42 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:16:28.954 09:57:42 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:16:28.954 09:57:42 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:16:28.954 09:57:42 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:16:28.954 09:57:42 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:16:28.954 09:57:42 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:16:28.954 00:16:28.954 real 0m1.471s 00:16:28.954 user 0m1.282s 00:16:28.954 sys 0m0.102s 00:16:28.954 09:57:42 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:28.954 09:57:42 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:16:28.954 ************************************ 00:16:28.954 END TEST accel_xor 00:16:28.954 ************************************ 00:16:28.954 09:57:42 accel -- common/autotest_common.sh@1142 -- # return 0 00:16:28.954 09:57:42 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:16:28.954 09:57:42 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:16:28.954 09:57:42 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:28.954 09:57:42 accel -- common/autotest_common.sh@10 -- # set +x 00:16:28.954 ************************************ 00:16:28.954 START TEST accel_xor 00:16:28.954 ************************************ 00:16:28.954 09:57:42 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y -x 3 00:16:28.954 09:57:42 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:16:28.954 09:57:42 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:16:28.954 09:57:42 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:16:28.954 09:57:42 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:16:28.955 09:57:42 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:16:28.955 09:57:42 accel.accel_xor -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:16:28.955 09:57:42 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:16:28.955 09:57:42 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:16:28.955 09:57:42 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:16:28.955 09:57:42 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:16:28.955 09:57:42 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:16:28.955 09:57:42 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:16:28.955 09:57:42 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:16:28.955 09:57:42 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:16:28.955 [2024-07-15 09:57:42.225270] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
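Note: this second accel_xor pass is identical to the previous one apart from the extra -x 3 argument; where the earlier xor trace recorded val=2, this one records val=3, so -x presumably selects the number of xor source buffers (the trace itself does not label it). A hand-run sketch under the same assumptions as above:

  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w xor -y -x 3
  # same one-second verified run, xor workload with three source buffers (per -x 3)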
00:16:28.955 [2024-07-15 09:57:42.225366] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64138 ] 00:16:28.955 [2024-07-15 09:57:42.361303] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:28.955 [2024-07-15 09:57:42.472507] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:28.955 09:57:42 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:16:28.955 09:57:42 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:16:28.955 09:57:42 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:16:28.955 09:57:42 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:16:28.955 09:57:42 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:16:28.955 09:57:42 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:16:28.955 09:57:42 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:16:28.955 09:57:42 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:16:28.955 09:57:42 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:16:28.955 09:57:42 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:16:28.955 09:57:42 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:16:28.955 09:57:42 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:16:28.955 09:57:42 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:16:28.955 09:57:42 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:16:28.955 09:57:42 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:16:28.955 09:57:42 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:16:28.955 09:57:42 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:16:28.955 09:57:42 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:16:28.955 09:57:42 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:16:28.955 09:57:42 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:16:28.955 09:57:42 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:16:28.955 09:57:42 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:16:28.955 09:57:42 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:16:28.955 09:57:42 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:16:28.955 09:57:42 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:16:28.955 09:57:42 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:16:28.955 09:57:42 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:16:28.955 09:57:42 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:16:28.955 09:57:42 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:16:28.955 09:57:42 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:16:28.955 09:57:42 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:16:28.955 09:57:42 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:16:28.955 09:57:42 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:16:28.955 09:57:42 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:16:28.955 09:57:42 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:16:28.955 09:57:42 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:16:28.955 09:57:42 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:16:28.955 09:57:42 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:16:28.955 09:57:42 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:16:28.955 09:57:42 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 
00:16:28.955 09:57:42 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:16:28.955 09:57:42 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:16:28.955 09:57:42 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:16:28.955 09:57:42 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:16:28.955 09:57:42 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:16:28.955 09:57:42 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:16:28.955 09:57:42 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:16:28.955 09:57:42 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:16:28.955 09:57:42 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:16:28.955 09:57:42 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:16:28.955 09:57:42 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:16:28.955 09:57:42 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:16:28.955 09:57:42 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:16:28.955 09:57:42 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:16:28.955 09:57:42 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:16:28.955 09:57:42 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:16:28.955 09:57:42 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:16:28.955 09:57:42 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:16:28.955 09:57:42 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:16:28.955 09:57:42 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:16:28.955 09:57:42 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:16:28.955 09:57:42 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:16:28.955 09:57:42 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:16:28.955 09:57:42 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:16:28.955 09:57:42 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:16:28.955 09:57:42 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:16:28.955 09:57:42 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:16:28.955 09:57:42 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:16:28.955 09:57:42 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:16:28.955 09:57:42 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:16:30.328 09:57:43 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:16:30.328 09:57:43 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:16:30.328 09:57:43 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:16:30.328 09:57:43 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:16:30.328 09:57:43 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:16:30.328 09:57:43 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:16:30.328 09:57:43 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:16:30.328 09:57:43 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:16:30.328 09:57:43 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:16:30.328 09:57:43 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:16:30.328 09:57:43 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:16:30.328 09:57:43 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:16:30.328 09:57:43 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:16:30.328 09:57:43 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:16:30.328 09:57:43 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:16:30.328 09:57:43 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:16:30.328 09:57:43 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:16:30.328 09:57:43 accel.accel_xor 
-- accel/accel.sh@21 -- # case "$var" in 00:16:30.328 09:57:43 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:16:30.328 09:57:43 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:16:30.328 09:57:43 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:16:30.328 09:57:43 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:16:30.328 09:57:43 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:16:30.328 09:57:43 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:16:30.328 09:57:43 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:16:30.328 09:57:43 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:16:30.328 09:57:43 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:16:30.328 00:16:30.328 real 0m1.477s 00:16:30.328 user 0m1.289s 00:16:30.328 sys 0m0.099s 00:16:30.328 09:57:43 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:30.328 09:57:43 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:16:30.329 ************************************ 00:16:30.329 END TEST accel_xor 00:16:30.329 ************************************ 00:16:30.329 09:57:43 accel -- common/autotest_common.sh@1142 -- # return 0 00:16:30.329 09:57:43 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:16:30.329 09:57:43 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:16:30.329 09:57:43 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:30.329 09:57:43 accel -- common/autotest_common.sh@10 -- # set +x 00:16:30.329 ************************************ 00:16:30.329 START TEST accel_dif_verify 00:16:30.329 ************************************ 00:16:30.329 09:57:43 accel.accel_dif_verify -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_verify 00:16:30.329 09:57:43 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:16:30.329 09:57:43 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:16:30.329 09:57:43 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:16:30.329 09:57:43 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:16:30.329 09:57:43 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:16:30.329 09:57:43 accel.accel_dif_verify -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:16:30.329 09:57:43 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config 00:16:30.329 09:57:43 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:16:30.329 09:57:43 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:16:30.329 09:57:43 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:16:30.329 09:57:43 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:16:30.329 09:57:43 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:16:30.329 09:57:43 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:16:30.329 09:57:43 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:16:30.329 [2024-07-15 09:57:43.762618] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
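Note: the DIF passes that start here (dif_verify now, dif_generate and dif_generate_copy further down) are launched without -y, and their traces record No where the earlier passes recorded Yes. Their configuration also carries extra size values: besides the two 4096-byte buffers, the dif_verify and dif_generate traces show '512 bytes' and '8 bytes', presumably DIF block and metadata sizes, although the trace leaves them unlabeled. Sketch of the bare invocation, with the same caveats as above:

  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w dif_verify
  # one-second dif_verify workload; no -y here, matching the harness invocation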
00:16:30.329 [2024-07-15 09:57:43.762746] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64173 ] 00:16:30.329 [2024-07-15 09:57:43.901707] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:30.587 [2024-07-15 09:57:44.010047] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:30.587 09:57:44 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:16:30.587 09:57:44 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:16:30.587 09:57:44 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:16:30.587 09:57:44 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:16:30.587 09:57:44 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:16:30.587 09:57:44 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:16:30.587 09:57:44 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:16:30.587 09:57:44 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:16:30.587 09:57:44 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1 00:16:30.587 09:57:44 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:16:30.587 09:57:44 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:16:30.587 09:57:44 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:16:30.587 09:57:44 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:16:30.587 09:57:44 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:16:30.587 09:57:44 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:16:30.587 09:57:44 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:16:30.587 09:57:44 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:16:30.587 09:57:44 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:16:30.587 09:57:44 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:16:30.587 09:57:44 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:16:30.587 09:57:44 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:16:30.587 09:57:44 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:16:30.587 09:57:44 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:16:30.587 09:57:44 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:16:30.587 09:57:44 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:16:30.587 09:57:44 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:16:30.587 09:57:44 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:16:30.587 09:57:44 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:16:30.587 09:57:44 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:16:30.587 09:57:44 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:16:30.587 09:57:44 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:16:30.587 09:57:44 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:16:30.587 09:57:44 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:16:30.587 09:57:44 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:16:30.587 09:57:44 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:16:30.587 09:57:44 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:16:30.587 09:57:44 
accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:16:30.587 09:57:44 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:16:30.587 09:57:44 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:16:30.587 09:57:44 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:16:30.587 09:57:44 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:16:30.587 09:57:44 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:16:30.587 09:57:44 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:16:30.587 09:57:44 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:16:30.587 09:57:44 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:16:30.587 09:57:44 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:16:30.587 09:57:44 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:16:30.587 09:57:44 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:16:30.587 09:57:44 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:16:30.587 09:57:44 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:16:30.587 09:57:44 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:16:30.587 09:57:44 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:16:30.587 09:57:44 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:16:30.587 09:57:44 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:16:30.587 09:57:44 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:16:30.587 09:57:44 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:16:30.587 09:57:44 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:16:30.587 09:57:44 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:16:30.587 09:57:44 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:16:30.587 09:57:44 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:16:30.587 09:57:44 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:16:30.587 09:57:44 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:16:30.587 09:57:44 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:16:30.587 09:57:44 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:16:30.587 09:57:44 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:16:30.587 09:57:44 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:16:30.587 09:57:44 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:16:30.587 09:57:44 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:16:30.587 09:57:44 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:16:30.587 09:57:44 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:16:30.587 09:57:44 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:16:30.587 09:57:44 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:16:30.587 09:57:44 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:16:30.587 09:57:44 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:16:30.587 09:57:44 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:16:30.587 09:57:44 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:16:30.587 09:57:44 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:16:30.587 09:57:44 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:16:31.959 09:57:45 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:16:31.959 09:57:45 
accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:16:31.959 09:57:45 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:16:31.959 09:57:45 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:16:31.959 09:57:45 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:16:31.959 09:57:45 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:16:31.959 09:57:45 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:16:31.959 09:57:45 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:16:31.959 09:57:45 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:16:31.959 09:57:45 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:16:31.959 09:57:45 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:16:31.959 09:57:45 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:16:31.959 09:57:45 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:16:31.959 09:57:45 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:16:31.959 09:57:45 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:16:31.959 09:57:45 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:16:31.959 09:57:45 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:16:31.959 09:57:45 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:16:31.959 09:57:45 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:16:31.959 09:57:45 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:16:31.959 09:57:45 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:16:31.959 09:57:45 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:16:31.959 09:57:45 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:16:31.959 09:57:45 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:16:31.959 09:57:45 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:16:31.959 09:57:45 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:16:31.959 09:57:45 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:16:31.959 00:16:31.959 real 0m1.471s 00:16:31.959 user 0m1.281s 00:16:31.959 sys 0m0.105s 00:16:31.959 09:57:45 accel.accel_dif_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:31.959 09:57:45 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:16:31.959 ************************************ 00:16:31.959 END TEST accel_dif_verify 00:16:31.959 ************************************ 00:16:31.959 09:57:45 accel -- common/autotest_common.sh@1142 -- # return 0 00:16:31.959 09:57:45 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:16:31.959 09:57:45 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:16:31.959 09:57:45 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:31.959 09:57:45 accel -- common/autotest_common.sh@10 -- # set +x 00:16:31.959 ************************************ 00:16:31.959 START TEST accel_dif_generate 00:16:31.959 ************************************ 00:16:31.960 09:57:45 accel.accel_dif_generate -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate 00:16:31.960 09:57:45 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:16:31.960 09:57:45 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:16:31.960 09:57:45 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:16:31.960 09:57:45 
accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:16:31.960 09:57:45 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:16:31.960 09:57:45 accel.accel_dif_generate -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:16:31.960 09:57:45 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:16:31.960 09:57:45 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:16:31.960 09:57:45 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:16:31.960 09:57:45 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:16:31.960 09:57:45 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:16:31.960 09:57:45 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:16:31.960 09:57:45 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:16:31.960 09:57:45 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:16:31.960 [2024-07-15 09:57:45.295333] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:16:31.960 [2024-07-15 09:57:45.295454] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64205 ] 00:16:31.960 [2024-07-15 09:57:45.432771] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:32.218 [2024-07-15 09:57:45.542337] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:32.218 09:57:45 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:16:32.218 09:57:45 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:16:32.218 09:57:45 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:16:32.218 09:57:45 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:16:32.218 09:57:45 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:16:32.218 09:57:45 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:16:32.218 09:57:45 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:16:32.218 09:57:45 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:16:32.218 09:57:45 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:16:32.218 09:57:45 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:16:32.218 09:57:45 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:16:32.218 09:57:45 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:16:32.218 09:57:45 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:16:32.218 09:57:45 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:16:32.218 09:57:45 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:16:32.218 09:57:45 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:16:32.218 09:57:45 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:16:32.218 09:57:45 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:16:32.218 09:57:45 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:16:32.218 09:57:45 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:16:32.218 09:57:45 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:16:32.218 09:57:45 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:16:32.218 09:57:45 
accel.accel_dif_generate -- accel/accel.sh@23 -- # accel_opc=dif_generate 00:16:32.218 09:57:45 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:16:32.218 09:57:45 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:16:32.218 09:57:45 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:16:32.218 09:57:45 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:16:32.218 09:57:45 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:16:32.218 09:57:45 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:16:32.218 09:57:45 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:16:32.218 09:57:45 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:16:32.218 09:57:45 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:16:32.218 09:57:45 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:16:32.218 09:57:45 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:16:32.218 09:57:45 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:16:32.218 09:57:45 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:16:32.218 09:57:45 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:16:32.218 09:57:45 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:16:32.218 09:57:45 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:16:32.218 09:57:45 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:16:32.218 09:57:45 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:16:32.218 09:57:45 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:16:32.218 09:57:45 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:16:32.218 09:57:45 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:16:32.218 09:57:45 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:16:32.218 09:57:45 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:16:32.218 09:57:45 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:16:32.218 09:57:45 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:16:32.218 09:57:45 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:16:32.218 09:57:45 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:16:32.218 09:57:45 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:16:32.218 09:57:45 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:16:32.218 09:57:45 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:16:32.218 09:57:45 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:16:32.218 09:57:45 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:16:32.218 09:57:45 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:16:32.218 09:57:45 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:16:32.218 09:57:45 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:16:32.218 09:57:45 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:16:32.218 09:57:45 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:16:32.218 09:57:45 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:16:32.218 09:57:45 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:16:32.218 09:57:45 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='1 seconds' 00:16:32.218 09:57:45 accel.accel_dif_generate -- 
accel/accel.sh@21 -- # case "$var" in 00:16:32.218 09:57:45 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:16:32.218 09:57:45 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:16:32.218 09:57:45 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:16:32.218 09:57:45 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:16:32.218 09:57:45 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:16:32.218 09:57:45 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:16:32.218 09:57:45 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:16:32.218 09:57:45 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:16:32.218 09:57:45 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:16:32.218 09:57:45 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:16:32.218 09:57:45 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:16:32.218 09:57:45 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:16:32.218 09:57:45 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:16:32.218 09:57:45 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:16:33.154 09:57:46 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:16:33.154 09:57:46 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:16:33.154 09:57:46 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:16:33.154 09:57:46 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:16:33.154 09:57:46 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:16:33.154 09:57:46 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:16:33.154 09:57:46 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:16:33.154 09:57:46 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:16:33.154 09:57:46 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:16:33.154 09:57:46 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:16:33.154 09:57:46 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:16:33.154 09:57:46 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:16:33.154 09:57:46 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:16:33.154 09:57:46 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:16:33.154 09:57:46 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:16:33.154 09:57:46 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:16:33.154 09:57:46 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:16:33.154 09:57:46 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:16:33.154 09:57:46 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:16:33.154 09:57:46 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:16:33.154 09:57:46 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:16:33.154 09:57:46 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:16:33.154 09:57:46 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:16:33.154 09:57:46 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:16:33.154 09:57:46 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:16:33.154 09:57:46 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:16:33.154 09:57:46 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:16:33.154 00:16:33.154 real 0m1.468s 
00:16:33.154 user 0m1.275s 00:16:33.154 sys 0m0.100s 00:16:33.154 09:57:46 accel.accel_dif_generate -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:33.154 09:57:46 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:16:33.154 ************************************ 00:16:33.154 END TEST accel_dif_generate 00:16:33.154 ************************************ 00:16:33.413 09:57:46 accel -- common/autotest_common.sh@1142 -- # return 0 00:16:33.413 09:57:46 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:16:33.413 09:57:46 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:16:33.413 09:57:46 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:33.413 09:57:46 accel -- common/autotest_common.sh@10 -- # set +x 00:16:33.413 ************************************ 00:16:33.413 START TEST accel_dif_generate_copy 00:16:33.413 ************************************ 00:16:33.413 09:57:46 accel.accel_dif_generate_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate_copy 00:16:33.413 09:57:46 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:16:33.413 09:57:46 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:16:33.413 09:57:46 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:16:33.413 09:57:46 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:16:33.413 09:57:46 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:16:33.413 09:57:46 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:16:33.413 09:57:46 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:16:33.413 09:57:46 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:16:33.413 09:57:46 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:16:33.413 09:57:46 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:16:33.413 09:57:46 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:16:33.413 09:57:46 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:16:33.413 09:57:46 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:16:33.413 09:57:46 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 00:16:33.413 [2024-07-15 09:57:46.819520] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:16:33.413 [2024-07-15 09:57:46.819610] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64244 ] 00:16:33.413 [2024-07-15 09:57:46.960794] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:33.672 [2024-07-15 09:57:47.070710] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:33.672 09:57:47 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:16:33.672 09:57:47 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:16:33.672 09:57:47 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:16:33.672 09:57:47 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:16:33.672 09:57:47 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:16:33.672 09:57:47 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:16:33.672 09:57:47 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:16:33.672 09:57:47 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:16:33.672 09:57:47 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:16:33.672 09:57:47 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:16:33.672 09:57:47 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:16:33.672 09:57:47 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:16:33.672 09:57:47 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:16:33.672 09:57:47 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:16:33.672 09:57:47 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:16:33.672 09:57:47 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:16:33.672 09:57:47 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:16:33.672 09:57:47 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:16:33.672 09:57:47 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:16:33.672 09:57:47 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:16:33.672 09:57:47 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:16:33.672 09:57:47 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:16:33.672 09:57:47 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:16:33.672 09:57:47 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:16:33.672 09:57:47 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:16:33.672 09:57:47 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:16:33.672 09:57:47 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:16:33.672 09:57:47 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:16:33.672 09:57:47 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:16:33.672 09:57:47 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:16:33.672 09:57:47 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:16:33.672 09:57:47 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:16:33.672 09:57:47 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:16:33.672 09:57:47 
accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:16:33.672 09:57:47 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:16:33.672 09:57:47 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:16:33.672 09:57:47 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:16:33.672 09:57:47 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:16:33.672 09:57:47 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:16:33.672 09:57:47 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:16:33.672 09:57:47 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:16:33.672 09:57:47 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:16:33.672 09:57:47 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:16:33.672 09:57:47 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:16:33.672 09:57:47 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:16:33.673 09:57:47 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:16:33.673 09:57:47 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:16:33.673 09:57:47 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:16:33.673 09:57:47 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:16:33.673 09:57:47 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:16:33.673 09:57:47 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:16:33.673 09:57:47 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:16:33.673 09:57:47 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:16:33.673 09:57:47 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:16:33.673 09:57:47 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:16:33.673 09:57:47 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:16:33.673 09:57:47 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:16:33.673 09:57:47 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:16:33.673 09:57:47 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:16:33.673 09:57:47 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:16:33.673 09:57:47 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:16:33.673 09:57:47 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:16:33.673 09:57:47 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:16:33.673 09:57:47 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:16:33.673 09:57:47 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:16:33.673 09:57:47 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:16:33.673 09:57:47 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:16:33.673 09:57:47 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:16:33.673 09:57:47 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:16:33.673 09:57:47 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:16:35.058 09:57:48 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:16:35.058 09:57:48 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:16:35.058 09:57:48 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 
00:16:35.058 09:57:48 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:16:35.058 09:57:48 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:16:35.058 09:57:48 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:16:35.058 09:57:48 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:16:35.058 09:57:48 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:16:35.058 09:57:48 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:16:35.058 09:57:48 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:16:35.058 09:57:48 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:16:35.058 09:57:48 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:16:35.058 09:57:48 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:16:35.058 09:57:48 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:16:35.058 09:57:48 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:16:35.058 09:57:48 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:16:35.058 09:57:48 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:16:35.058 09:57:48 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:16:35.058 09:57:48 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:16:35.058 09:57:48 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:16:35.058 09:57:48 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:16:35.058 09:57:48 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:16:35.058 09:57:48 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:16:35.058 09:57:48 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:16:35.058 09:57:48 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:16:35.058 09:57:48 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:16:35.058 09:57:48 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:16:35.058 00:16:35.058 real 0m1.473s 00:16:35.058 user 0m0.011s 00:16:35.058 sys 0m0.003s 00:16:35.058 09:57:48 accel.accel_dif_generate_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:35.058 09:57:48 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:16:35.058 ************************************ 00:16:35.058 END TEST accel_dif_generate_copy 00:16:35.058 ************************************ 00:16:35.058 09:57:48 accel -- common/autotest_common.sh@1142 -- # return 0 00:16:35.058 09:57:48 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:16:35.058 09:57:48 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:16:35.058 09:57:48 accel -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:16:35.058 09:57:48 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:35.059 09:57:48 accel -- common/autotest_common.sh@10 -- # set +x 00:16:35.059 ************************************ 00:16:35.059 START TEST accel_comp 00:16:35.059 ************************************ 00:16:35.059 09:57:48 accel.accel_comp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:16:35.059 09:57:48 accel.accel_comp -- accel/accel.sh@16 -- # local accel_opc 00:16:35.059 09:57:48 
accel.accel_comp -- accel/accel.sh@17 -- # local accel_module 00:16:35.059 09:57:48 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:16:35.059 09:57:48 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:16:35.059 09:57:48 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:16:35.059 09:57:48 accel.accel_comp -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:16:35.059 09:57:48 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:16:35.059 09:57:48 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:16:35.059 09:57:48 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:16:35.059 09:57:48 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:16:35.059 09:57:48 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:16:35.059 09:57:48 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:16:35.059 09:57:48 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:16:35.059 09:57:48 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:16:35.059 [2024-07-15 09:57:48.340530] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:16:35.059 [2024-07-15 09:57:48.340626] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64273 ] 00:16:35.059 [2024-07-15 09:57:48.479237] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:35.059 [2024-07-15 09:57:48.587699] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:35.059 09:57:48 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:16:35.059 09:57:48 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:16:35.059 09:57:48 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:16:35.059 09:57:48 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:16:35.059 09:57:48 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:16:35.059 09:57:48 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:16:35.059 09:57:48 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:16:35.059 09:57:48 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:16:35.059 09:57:48 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:16:35.059 09:57:48 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:16:35.059 09:57:48 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:16:35.059 09:57:48 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:16:35.059 09:57:48 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:16:35.059 09:57:48 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:16:35.059 09:57:48 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:16:35.059 09:57:48 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:16:35.059 09:57:48 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:16:35.059 09:57:48 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:16:35.059 09:57:48 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:16:35.059 09:57:48 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:16:35.059 09:57:48 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:16:35.059 09:57:48 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:16:35.059 09:57:48 accel.accel_comp -- accel/accel.sh@19 -- # 
IFS=: 00:16:35.059 09:57:48 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:16:35.059 09:57:48 accel.accel_comp -- accel/accel.sh@20 -- # val=compress 00:16:35.059 09:57:48 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:16:35.059 09:57:48 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:16:35.059 09:57:48 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:16:35.059 09:57:48 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:16:35.059 09:57:48 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:16:35.059 09:57:48 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:16:35.059 09:57:48 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:16:35.059 09:57:48 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:16:35.059 09:57:48 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:16:35.059 09:57:48 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:16:35.059 09:57:48 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:16:35.059 09:57:48 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:16:35.059 09:57:48 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:16:35.059 09:57:48 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:16:35.059 09:57:48 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:16:35.059 09:57:48 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:16:35.059 09:57:48 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:16:35.059 09:57:48 accel.accel_comp -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:16:35.059 09:57:48 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:16:35.059 09:57:48 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:16:35.059 09:57:48 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:16:35.059 09:57:48 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:16:35.059 09:57:48 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:16:35.059 09:57:48 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:16:35.059 09:57:48 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:16:35.059 09:57:48 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:16:35.317 09:57:48 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:16:35.318 09:57:48 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:16:35.318 09:57:48 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:16:35.318 09:57:48 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:16:35.318 09:57:48 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:16:35.318 09:57:48 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:16:35.318 09:57:48 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:16:35.318 09:57:48 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:16:35.318 09:57:48 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:16:35.318 09:57:48 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:16:35.318 09:57:48 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:16:35.318 09:57:48 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:16:35.318 09:57:48 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:16:35.318 09:57:48 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:16:35.318 09:57:48 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:16:35.318 09:57:48 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:16:35.318 09:57:48 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:16:35.318 09:57:48 
accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:16:35.318 09:57:48 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:16:35.318 09:57:48 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:16:35.318 09:57:48 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:16:35.318 09:57:48 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:16:35.318 09:57:48 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:16:36.304 09:57:49 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:16:36.304 09:57:49 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:16:36.304 09:57:49 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:16:36.304 09:57:49 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:16:36.304 09:57:49 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:16:36.304 09:57:49 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:16:36.304 09:57:49 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:16:36.304 09:57:49 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:16:36.304 09:57:49 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:16:36.304 09:57:49 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:16:36.304 09:57:49 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:16:36.304 09:57:49 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:16:36.304 09:57:49 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:16:36.304 09:57:49 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:16:36.304 09:57:49 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:16:36.304 09:57:49 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:16:36.304 09:57:49 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:16:36.304 09:57:49 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:16:36.304 09:57:49 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:16:36.304 09:57:49 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:16:36.304 09:57:49 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:16:36.304 09:57:49 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:16:36.304 09:57:49 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:16:36.304 09:57:49 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:16:36.304 09:57:49 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:16:36.304 09:57:49 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:16:36.304 09:57:49 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:16:36.304 00:16:36.304 real 0m1.469s 00:16:36.304 user 0m0.014s 00:16:36.304 sys 0m0.000s 00:16:36.304 09:57:49 accel.accel_comp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:36.304 09:57:49 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:16:36.304 ************************************ 00:16:36.304 END TEST accel_comp 00:16:36.304 ************************************ 00:16:36.304 09:57:49 accel -- common/autotest_common.sh@1142 -- # return 0 00:16:36.304 09:57:49 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:16:36.304 09:57:49 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:16:36.304 09:57:49 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:36.304 09:57:49 accel -- common/autotest_common.sh@10 -- # set +x 00:16:36.304 ************************************ 00:16:36.304 START TEST accel_decomp 00:16:36.304 ************************************ 00:16:36.304 09:57:49 
accel.accel_decomp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:16:36.304 09:57:49 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:16:36.304 09:57:49 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:16:36.304 09:57:49 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:16:36.304 09:57:49 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:16:36.304 09:57:49 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:16:36.304 09:57:49 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:16:36.304 09:57:49 accel.accel_decomp -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:16:36.304 09:57:49 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:16:36.304 09:57:49 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:16:36.304 09:57:49 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:16:36.304 09:57:49 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:16:36.304 09:57:49 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:16:36.304 09:57:49 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:16:36.304 09:57:49 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 00:16:36.304 [2024-07-15 09:57:49.870943] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:16:36.304 [2024-07-15 09:57:49.871034] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64313 ] 00:16:36.578 [2024-07-15 09:57:50.008995] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:36.578 [2024-07-15 09:57:50.117979] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:36.835 09:57:50 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:16:36.835 09:57:50 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:16:36.835 09:57:50 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:16:36.835 09:57:50 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:16:36.835 09:57:50 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:16:36.835 09:57:50 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:16:36.835 09:57:50 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:16:36.835 09:57:50 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:16:36.835 09:57:50 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:16:36.835 09:57:50 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:16:36.835 09:57:50 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:16:36.835 09:57:50 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:16:36.835 09:57:50 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:16:36.835 09:57:50 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:16:36.835 09:57:50 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:16:36.835 09:57:50 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:16:36.835 09:57:50 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:16:36.835 09:57:50 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:16:36.835 09:57:50 accel.accel_decomp -- 
accel/accel.sh@19 -- # IFS=: 00:16:36.835 09:57:50 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:16:36.835 09:57:50 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:16:36.835 09:57:50 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:16:36.835 09:57:50 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:16:36.835 09:57:50 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:16:36.835 09:57:50 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:16:36.835 09:57:50 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:16:36.835 09:57:50 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:16:36.835 09:57:50 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:16:36.835 09:57:50 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:16:36.835 09:57:50 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:16:36.835 09:57:50 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:16:36.835 09:57:50 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:16:36.835 09:57:50 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:16:36.835 09:57:50 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:16:36.835 09:57:50 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:16:36.835 09:57:50 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:16:36.835 09:57:50 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:16:36.835 09:57:50 accel.accel_decomp -- accel/accel.sh@20 -- # val=software 00:16:36.835 09:57:50 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:16:36.835 09:57:50 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:16:36.835 09:57:50 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:16:36.835 09:57:50 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:16:36.835 09:57:50 accel.accel_decomp -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:16:36.835 09:57:50 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:16:36.835 09:57:50 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:16:36.835 09:57:50 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:16:36.835 09:57:50 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:16:36.835 09:57:50 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:16:36.835 09:57:50 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:16:36.835 09:57:50 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:16:36.835 09:57:50 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:16:36.835 09:57:50 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:16:36.835 09:57:50 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:16:36.835 09:57:50 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:16:36.835 09:57:50 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:16:36.835 09:57:50 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:16:36.835 09:57:50 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:16:36.836 09:57:50 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:16:36.836 09:57:50 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:16:36.836 09:57:50 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:16:36.836 09:57:50 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:16:36.836 09:57:50 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:16:36.836 09:57:50 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 
00:16:36.836 09:57:50 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:16:36.836 09:57:50 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:16:36.836 09:57:50 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:16:36.836 09:57:50 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:16:36.836 09:57:50 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:16:36.836 09:57:50 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:16:36.836 09:57:50 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:16:36.836 09:57:50 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:16:36.836 09:57:50 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:16:36.836 09:57:50 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:16:36.836 09:57:50 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:16:37.767 09:57:51 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:16:37.767 09:57:51 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:16:37.767 09:57:51 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:16:37.767 09:57:51 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:16:37.767 09:57:51 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:16:37.767 09:57:51 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:16:37.767 09:57:51 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:16:37.767 09:57:51 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:16:37.767 09:57:51 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:16:37.767 09:57:51 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:16:37.767 09:57:51 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:16:37.767 09:57:51 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:16:37.767 09:57:51 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:16:37.767 09:57:51 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:16:37.767 09:57:51 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:16:37.767 09:57:51 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:16:37.767 09:57:51 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:16:37.767 09:57:51 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:16:37.767 09:57:51 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:16:37.767 09:57:51 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:16:37.767 09:57:51 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:16:37.767 09:57:51 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:16:37.767 09:57:51 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:16:37.767 09:57:51 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:16:37.767 09:57:51 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:16:37.767 09:57:51 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:16:37.767 09:57:51 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:16:37.767 00:16:37.767 real 0m1.466s 00:16:37.767 user 0m0.021s 00:16:37.767 sys 0m0.001s 00:16:37.767 09:57:51 accel.accel_decomp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:37.767 09:57:51 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:16:37.767 ************************************ 00:16:37.767 END TEST accel_decomp 00:16:37.767 ************************************ 00:16:38.025 09:57:51 accel -- common/autotest_common.sh@1142 -- # return 0 00:16:38.025 09:57:51 accel -- accel/accel.sh@118 -- # run_test 
accel_decomp_full accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:16:38.025 09:57:51 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:16:38.025 09:57:51 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:38.025 09:57:51 accel -- common/autotest_common.sh@10 -- # set +x 00:16:38.025 ************************************ 00:16:38.025 START TEST accel_decomp_full 00:16:38.025 ************************************ 00:16:38.025 09:57:51 accel.accel_decomp_full -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:16:38.025 09:57:51 accel.accel_decomp_full -- accel/accel.sh@16 -- # local accel_opc 00:16:38.025 09:57:51 accel.accel_decomp_full -- accel/accel.sh@17 -- # local accel_module 00:16:38.025 09:57:51 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:16:38.025 09:57:51 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:16:38.025 09:57:51 accel.accel_decomp_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:16:38.025 09:57:51 accel.accel_decomp_full -- accel/accel.sh@12 -- # build_accel_config 00:16:38.025 09:57:51 accel.accel_decomp_full -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:16:38.025 09:57:51 accel.accel_decomp_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:16:38.025 09:57:51 accel.accel_decomp_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:16:38.025 09:57:51 accel.accel_decomp_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:16:38.025 09:57:51 accel.accel_decomp_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:16:38.025 09:57:51 accel.accel_decomp_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:16:38.025 09:57:51 accel.accel_decomp_full -- accel/accel.sh@40 -- # local IFS=, 00:16:38.025 09:57:51 accel.accel_decomp_full -- accel/accel.sh@41 -- # jq -r . 00:16:38.025 [2024-07-15 09:57:51.398311] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:16:38.025 [2024-07-15 09:57:51.398406] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64342 ] 00:16:38.025 [2024-07-15 09:57:51.538009] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:38.283 [2024-07-15 09:57:51.647069] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:38.283 09:57:51 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:16:38.283 09:57:51 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:16:38.283 09:57:51 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:16:38.283 09:57:51 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:16:38.283 09:57:51 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:16:38.283 09:57:51 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:16:38.283 09:57:51 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:16:38.283 09:57:51 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:16:38.283 09:57:51 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:16:38.283 09:57:51 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:16:38.283 09:57:51 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:16:38.283 09:57:51 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:16:38.283 09:57:51 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=0x1 00:16:38.283 09:57:51 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:16:38.283 09:57:51 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:16:38.283 09:57:51 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:16:38.283 09:57:51 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:16:38.283 09:57:51 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:16:38.283 09:57:51 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:16:38.283 09:57:51 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:16:38.283 09:57:51 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:16:38.283 09:57:51 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:16:38.283 09:57:51 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:16:38.283 09:57:51 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:16:38.283 09:57:51 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=decompress 00:16:38.283 09:57:51 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:16:38.283 09:57:51 accel.accel_decomp_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:16:38.283 09:57:51 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:16:38.283 09:57:51 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:16:38.283 09:57:51 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:16:38.283 09:57:51 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:16:38.283 09:57:51 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:16:38.283 09:57:51 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:16:38.283 09:57:51 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:16:38.283 09:57:51 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:16:38.283 09:57:51 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:16:38.283 09:57:51 
accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:16:38.283 09:57:51 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=software 00:16:38.283 09:57:51 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:16:38.283 09:57:51 accel.accel_decomp_full -- accel/accel.sh@22 -- # accel_module=software 00:16:38.283 09:57:51 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:16:38.283 09:57:51 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:16:38.283 09:57:51 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:16:38.283 09:57:51 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:16:38.283 09:57:51 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:16:38.283 09:57:51 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:16:38.283 09:57:51 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:16:38.283 09:57:51 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:16:38.283 09:57:51 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:16:38.283 09:57:51 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:16:38.283 09:57:51 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:16:38.283 09:57:51 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:16:38.283 09:57:51 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:16:38.283 09:57:51 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:16:38.283 09:57:51 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=1 00:16:38.283 09:57:51 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:16:38.283 09:57:51 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:16:38.283 09:57:51 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:16:38.283 09:57:51 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='1 seconds' 00:16:38.283 09:57:51 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:16:38.283 09:57:51 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:16:38.283 09:57:51 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:16:38.283 09:57:51 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=Yes 00:16:38.283 09:57:51 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:16:38.283 09:57:51 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:16:38.283 09:57:51 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:16:38.283 09:57:51 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:16:38.283 09:57:51 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:16:38.283 09:57:51 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:16:38.283 09:57:51 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:16:38.283 09:57:51 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:16:38.283 09:57:51 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:16:38.283 09:57:51 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:16:38.283 09:57:51 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:16:39.662 09:57:52 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:16:39.662 09:57:52 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:16:39.662 09:57:52 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:16:39.662 09:57:52 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:16:39.662 09:57:52 
accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:16:39.662 09:57:52 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:16:39.662 09:57:52 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:16:39.662 09:57:52 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:16:39.662 09:57:52 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:16:39.662 09:57:52 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:16:39.662 09:57:52 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:16:39.662 09:57:52 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:16:39.662 09:57:52 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:16:39.662 09:57:52 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:16:39.662 09:57:52 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:16:39.662 09:57:52 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:16:39.662 09:57:52 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:16:39.662 09:57:52 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:16:39.662 09:57:52 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:16:39.662 09:57:52 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:16:39.662 09:57:52 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:16:39.662 09:57:52 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:16:39.662 09:57:52 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:16:39.662 09:57:52 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:16:39.662 09:57:52 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:16:39.662 09:57:52 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:16:39.662 09:57:52 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:16:39.662 00:16:39.662 real 0m1.485s 00:16:39.662 user 0m1.297s 00:16:39.662 sys 0m0.101s 00:16:39.662 09:57:52 accel.accel_decomp_full -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:39.662 09:57:52 accel.accel_decomp_full -- common/autotest_common.sh@10 -- # set +x 00:16:39.662 ************************************ 00:16:39.662 END TEST accel_decomp_full 00:16:39.662 ************************************ 00:16:39.662 09:57:52 accel -- common/autotest_common.sh@1142 -- # return 0 00:16:39.662 09:57:52 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:16:39.662 09:57:52 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:16:39.662 09:57:52 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:39.662 09:57:52 accel -- common/autotest_common.sh@10 -- # set +x 00:16:39.662 ************************************ 00:16:39.662 START TEST accel_decomp_mcore 00:16:39.662 ************************************ 00:16:39.662 09:57:52 accel.accel_decomp_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:16:39.662 09:57:52 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:16:39.662 09:57:52 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:16:39.662 09:57:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:16:39.662 09:57:52 accel.accel_decomp_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 
-m 0xf 00:16:39.662 09:57:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:16:39.662 09:57:52 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:16:39.662 09:57:52 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:16:39.662 09:57:52 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:16:39.662 09:57:52 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:16:39.662 09:57:52 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:16:39.662 09:57:52 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:16:39.662 09:57:52 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:16:39.662 09:57:52 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:16:39.662 09:57:52 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:16:39.662 [2024-07-15 09:57:52.943334] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:16:39.662 [2024-07-15 09:57:52.943560] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64382 ] 00:16:39.662 [2024-07-15 09:57:53.086560] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:39.662 [2024-07-15 09:57:53.197348] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:39.662 [2024-07-15 09:57:53.197622] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:39.662 [2024-07-15 09:57:53.197436] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:39.662 [2024-07-15 09:57:53.197624] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:16:39.932 09:57:53 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:16:39.932 09:57:53 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:16:39.932 09:57:53 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:16:39.932 09:57:53 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:16:39.932 09:57:53 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:16:39.932 09:57:53 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:16:39.932 09:57:53 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:16:39.932 09:57:53 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:16:39.932 09:57:53 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:16:39.932 09:57:53 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:16:39.932 09:57:53 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:16:39.932 09:57:53 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:16:39.932 09:57:53 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:16:39.932 09:57:53 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:16:39.932 09:57:53 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:16:39.932 09:57:53 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:16:39.932 09:57:53 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:16:39.932 09:57:53 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:16:39.932 09:57:53 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 
00:16:39.932 09:57:53 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:16:39.932 09:57:53 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:16:39.932 09:57:53 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:16:39.932 09:57:53 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:16:39.932 09:57:53 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:16:39.932 09:57:53 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:16:39.932 09:57:53 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:16:39.932 09:57:53 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:16:39.932 09:57:53 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:16:39.932 09:57:53 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:16:39.932 09:57:53 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:16:39.932 09:57:53 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:16:39.932 09:57:53 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:16:39.932 09:57:53 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:16:39.932 09:57:53 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:16:39.932 09:57:53 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:16:39.932 09:57:53 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:16:39.932 09:57:53 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:16:39.932 09:57:53 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:16:39.932 09:57:53 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:16:39.932 09:57:53 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:16:39.932 09:57:53 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:16:39.932 09:57:53 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:16:39.932 09:57:53 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:16:39.932 09:57:53 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:16:39.932 09:57:53 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:16:39.932 09:57:53 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:16:39.932 09:57:53 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:16:39.932 09:57:53 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:16:39.932 09:57:53 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:16:39.932 09:57:53 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:16:39.932 09:57:53 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:16:39.932 09:57:53 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:16:39.932 09:57:53 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:16:39.932 09:57:53 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:16:39.932 09:57:53 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:16:39.932 09:57:53 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:16:39.932 09:57:53 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:16:39.932 09:57:53 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:16:39.932 09:57:53 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:16:39.932 09:57:53 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:16:39.932 
09:57:53 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:16:39.932 09:57:53 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:16:39.932 09:57:53 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:16:39.932 09:57:53 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:16:39.932 09:57:53 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:16:39.932 09:57:53 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:16:39.932 09:57:53 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:16:39.932 09:57:53 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:16:39.932 09:57:53 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:16:39.932 09:57:53 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:16:39.932 09:57:53 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:16:39.932 09:57:53 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:16:39.932 09:57:53 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:16:39.932 09:57:53 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:16:40.867 09:57:54 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:16:40.867 09:57:54 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:16:40.867 09:57:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:16:40.867 09:57:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:16:40.867 09:57:54 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:16:40.867 09:57:54 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:16:40.867 09:57:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:16:40.867 09:57:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:16:40.867 09:57:54 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:16:40.867 09:57:54 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:16:40.867 09:57:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:16:40.867 09:57:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:16:40.867 09:57:54 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:16:40.867 09:57:54 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:16:40.867 09:57:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:16:40.867 09:57:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:16:40.867 09:57:54 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:16:40.867 09:57:54 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:16:40.867 09:57:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:16:40.867 09:57:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:16:40.867 09:57:54 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:16:40.867 09:57:54 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:16:40.867 09:57:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:16:40.867 09:57:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:16:40.867 09:57:54 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:16:40.867 09:57:54 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:16:40.867 09:57:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:16:40.867 09:57:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:16:40.867 09:57:54 accel.accel_decomp_mcore -- 
accel/accel.sh@20 -- # val= 00:16:40.867 09:57:54 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:16:40.867 09:57:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:16:40.867 09:57:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:16:40.867 09:57:54 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:16:40.867 09:57:54 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:16:40.867 09:57:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:16:40.867 09:57:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:16:40.867 09:57:54 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:16:40.867 09:57:54 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:16:40.867 09:57:54 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:16:40.867 00:16:40.867 real 0m1.497s 00:16:40.867 user 0m4.607s 00:16:40.867 sys 0m0.111s 00:16:40.867 09:57:54 accel.accel_decomp_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:40.867 09:57:54 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:16:40.867 ************************************ 00:16:40.867 END TEST accel_decomp_mcore 00:16:40.867 ************************************ 00:16:41.125 09:57:54 accel -- common/autotest_common.sh@1142 -- # return 0 00:16:41.125 09:57:54 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:16:41.125 09:57:54 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:16:41.125 09:57:54 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:41.125 09:57:54 accel -- common/autotest_common.sh@10 -- # set +x 00:16:41.125 ************************************ 00:16:41.125 START TEST accel_decomp_full_mcore 00:16:41.125 ************************************ 00:16:41.125 09:57:54 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:16:41.125 09:57:54 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:16:41.125 09:57:54 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:16:41.125 09:57:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:16:41.125 09:57:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:16:41.125 09:57:54 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:16:41.125 09:57:54 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:16:41.125 09:57:54 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:16:41.125 09:57:54 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:16:41.125 09:57:54 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:16:41.125 09:57:54 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:16:41.125 09:57:54 accel.accel_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:16:41.125 09:57:54 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:16:41.125 09:57:54 accel.accel_decomp_full_mcore -- 
accel/accel.sh@40 -- # local IFS=, 00:16:41.125 09:57:54 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 00:16:41.125 [2024-07-15 09:57:54.497482] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:16:41.125 [2024-07-15 09:57:54.497576] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64415 ] 00:16:41.125 [2024-07-15 09:57:54.637011] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:41.384 [2024-07-15 09:57:54.754836] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:41.384 [2024-07-15 09:57:54.755038] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:41.384 [2024-07-15 09:57:54.755159] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:41.384 [2024-07-15 09:57:54.755149] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:16:41.384 09:57:54 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:16:41.384 09:57:54 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:16:41.384 09:57:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:16:41.384 09:57:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:16:41.384 09:57:54 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:16:41.384 09:57:54 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:16:41.384 09:57:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:16:41.384 09:57:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:16:41.384 09:57:54 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:16:41.384 09:57:54 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:16:41.384 09:57:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:16:41.384 09:57:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:16:41.384 09:57:54 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:16:41.384 09:57:54 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:16:41.384 09:57:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:16:41.384 09:57:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:16:41.384 09:57:54 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:16:41.384 09:57:54 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:16:41.384 09:57:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:16:41.384 09:57:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:16:41.384 09:57:54 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:16:41.384 09:57:54 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:16:41.384 09:57:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:16:41.384 09:57:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:16:41.384 09:57:54 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:16:41.384 09:57:54 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:16:41.384 09:57:54 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:16:41.384 09:57:54 accel.accel_decomp_full_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:16:41.384 09:57:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:16:41.384 09:57:54 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='111250 bytes' 00:16:41.384 09:57:54 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:16:41.384 09:57:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:16:41.384 09:57:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:16:41.384 09:57:54 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:16:41.384 09:57:54 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:16:41.384 09:57:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:16:41.384 09:57:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:16:41.384 09:57:54 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:16:41.384 09:57:54 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:16:41.384 09:57:54 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:16:41.384 09:57:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:16:41.384 09:57:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:16:41.384 09:57:54 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:16:41.384 09:57:54 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:16:41.384 09:57:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:16:41.384 09:57:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:16:41.384 09:57:54 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:16:41.384 09:57:54 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:16:41.384 09:57:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:16:41.384 09:57:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:16:41.384 09:57:54 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:16:41.384 09:57:54 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:16:41.384 09:57:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:16:41.384 09:57:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:16:41.384 09:57:54 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:16:41.384 09:57:54 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:16:41.384 09:57:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:16:41.384 09:57:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:16:41.384 09:57:54 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:16:41.384 09:57:54 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:16:41.384 09:57:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:16:41.384 09:57:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:16:41.384 09:57:54 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:16:41.384 09:57:54 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:16:41.384 09:57:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:16:41.384 09:57:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:16:41.384 09:57:54 accel.accel_decomp_full_mcore -- 
accel/accel.sh@20 -- # val= 00:16:41.384 09:57:54 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:16:41.384 09:57:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:16:41.384 09:57:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:16:41.384 09:57:54 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:16:41.384 09:57:54 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:16:41.384 09:57:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:16:41.384 09:57:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:16:42.760 09:57:55 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:16:42.760 09:57:55 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:16:42.760 09:57:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:16:42.760 09:57:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:16:42.760 09:57:55 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:16:42.760 09:57:55 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:16:42.760 09:57:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:16:42.760 09:57:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:16:42.760 09:57:55 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:16:42.760 09:57:55 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:16:42.760 09:57:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:16:42.760 09:57:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:16:42.760 09:57:55 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:16:42.760 09:57:55 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:16:42.760 09:57:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:16:42.760 09:57:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:16:42.760 09:57:55 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:16:42.760 09:57:55 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:16:42.760 09:57:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:16:42.760 09:57:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:16:42.760 09:57:55 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:16:42.761 09:57:55 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:16:42.761 09:57:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:16:42.761 09:57:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:16:42.761 09:57:55 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:16:42.761 09:57:55 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:16:42.761 09:57:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:16:42.761 09:57:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:16:42.761 09:57:55 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:16:42.761 09:57:55 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:16:42.761 09:57:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:16:42.761 09:57:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:16:42.761 09:57:55 accel.accel_decomp_full_mcore -- 
accel/accel.sh@20 -- # val= 00:16:42.761 09:57:55 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:16:42.761 09:57:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:16:42.761 09:57:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:16:42.761 09:57:55 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:16:42.761 09:57:55 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:16:42.761 ************************************ 00:16:42.761 END TEST accel_decomp_full_mcore 00:16:42.761 ************************************ 00:16:42.761 09:57:55 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:16:42.761 00:16:42.761 real 0m1.520s 00:16:42.761 user 0m4.675s 00:16:42.761 sys 0m0.119s 00:16:42.761 09:57:55 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:42.761 09:57:55 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:16:42.761 09:57:56 accel -- common/autotest_common.sh@1142 -- # return 0 00:16:42.761 09:57:56 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:16:42.761 09:57:56 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:16:42.761 09:57:56 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:42.761 09:57:56 accel -- common/autotest_common.sh@10 -- # set +x 00:16:42.761 ************************************ 00:16:42.761 START TEST accel_decomp_mthread 00:16:42.761 ************************************ 00:16:42.761 09:57:56 accel.accel_decomp_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:16:42.761 09:57:56 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:16:42.761 09:57:56 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:16:42.761 09:57:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:16:42.761 09:57:56 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:16:42.761 09:57:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:16:42.761 09:57:56 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:16:42.761 09:57:56 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:16:42.761 09:57:56 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:16:42.761 09:57:56 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:16:42.761 09:57:56 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:16:42.761 09:57:56 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:16:42.761 09:57:56 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:16:42.761 09:57:56 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:16:42.761 09:57:56 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r . 00:16:42.761 [2024-07-15 09:57:56.069960] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:16:42.761 [2024-07-15 09:57:56.070059] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64458 ] 00:16:42.761 [2024-07-15 09:57:56.215055] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:42.761 [2024-07-15 09:57:56.325221] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:43.019 09:57:56 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:16:43.019 09:57:56 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:16:43.019 09:57:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:16:43.019 09:57:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:16:43.019 09:57:56 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:16:43.019 09:57:56 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:16:43.019 09:57:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:16:43.019 09:57:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:16:43.019 09:57:56 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:16:43.019 09:57:56 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:16:43.019 09:57:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:16:43.019 09:57:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:16:43.019 09:57:56 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:16:43.020 09:57:56 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:16:43.020 09:57:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:16:43.020 09:57:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:16:43.020 09:57:56 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:16:43.020 09:57:56 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:16:43.020 09:57:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:16:43.020 09:57:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:16:43.020 09:57:56 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:16:43.020 09:57:56 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:16:43.020 09:57:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:16:43.020 09:57:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:16:43.020 09:57:56 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:16:43.020 09:57:56 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:16:43.020 09:57:56 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:16:43.020 09:57:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:16:43.020 09:57:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:16:43.020 09:57:56 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:16:43.020 09:57:56 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:16:43.020 09:57:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:16:43.020 09:57:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:16:43.020 09:57:56 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:16:43.020 09:57:56 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 
00:16:43.020 09:57:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:16:43.020 09:57:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:16:43.020 09:57:56 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:16:43.020 09:57:56 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:16:43.020 09:57:56 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software 00:16:43.020 09:57:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:16:43.020 09:57:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:16:43.020 09:57:56 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:16:43.020 09:57:56 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:16:43.020 09:57:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:16:43.020 09:57:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:16:43.020 09:57:56 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:16:43.020 09:57:56 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:16:43.020 09:57:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:16:43.020 09:57:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:16:43.020 09:57:56 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:16:43.020 09:57:56 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:16:43.020 09:57:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:16:43.020 09:57:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:16:43.020 09:57:56 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:16:43.020 09:57:56 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:16:43.020 09:57:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:16:43.020 09:57:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:16:43.020 09:57:56 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:16:43.020 09:57:56 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:16:43.020 09:57:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:16:43.020 09:57:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:16:43.020 09:57:56 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:16:43.020 09:57:56 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:16:43.020 09:57:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:16:43.020 09:57:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:16:43.020 09:57:56 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:16:43.020 09:57:56 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:16:43.020 09:57:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:16:43.020 09:57:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:16:43.020 09:57:56 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:16:43.020 09:57:56 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:16:43.020 09:57:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:16:43.020 09:57:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:16:43.951 09:57:57 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:16:43.951 09:57:57 accel.accel_decomp_mthread -- 
accel/accel.sh@21 -- # case "$var" in 00:16:43.951 09:57:57 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:16:43.951 09:57:57 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:16:43.951 09:57:57 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:16:43.951 09:57:57 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:16:43.951 09:57:57 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:16:43.951 09:57:57 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:16:43.951 09:57:57 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:16:43.951 09:57:57 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:16:43.951 09:57:57 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:16:43.951 09:57:57 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:16:43.951 09:57:57 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:16:43.951 09:57:57 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:16:43.951 09:57:57 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:16:43.951 09:57:57 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:16:43.951 09:57:57 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:16:43.951 09:57:57 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:16:43.951 09:57:57 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:16:43.951 09:57:57 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:16:43.951 09:57:57 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:16:43.951 09:57:57 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:16:43.951 09:57:57 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:16:43.951 09:57:57 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:16:43.951 09:57:57 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:16:43.951 09:57:57 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:16:43.951 09:57:57 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:16:43.951 09:57:57 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:16:43.951 09:57:57 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:16:43.951 09:57:57 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:16:43.951 09:57:57 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:16:43.951 00:16:43.951 real 0m1.481s 00:16:43.951 user 0m0.020s 00:16:43.951 sys 0m0.000s 00:16:43.951 09:57:57 accel.accel_decomp_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:43.951 09:57:57 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:16:43.951 ************************************ 00:16:43.951 END TEST accel_decomp_mthread 00:16:43.951 ************************************ 00:16:44.208 09:57:57 accel -- common/autotest_common.sh@1142 -- # return 0 00:16:44.208 09:57:57 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:16:44.208 09:57:57 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:16:44.208 09:57:57 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:44.208 09:57:57 accel -- common/autotest_common.sh@10 -- # set +x 00:16:44.208 ************************************ 00:16:44.208 START 
TEST accel_decomp_full_mthread 00:16:44.208 ************************************ 00:16:44.208 09:57:57 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:16:44.208 09:57:57 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:16:44.208 09:57:57 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:16:44.208 09:57:57 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:16:44.208 09:57:57 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:16:44.208 09:57:57 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:16:44.208 09:57:57 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:16:44.208 09:57:57 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:16:44.208 09:57:57 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:16:44.208 09:57:57 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:16:44.208 09:57:57 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:16:44.208 09:57:57 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:16:44.208 09:57:57 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:16:44.208 09:57:57 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:16:44.208 09:57:57 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:16:44.208 [2024-07-15 09:57:57.614229] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:16:44.208 [2024-07-15 09:57:57.614428] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64489 ] 00:16:44.208 [2024-07-15 09:57:57.753859] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:44.466 [2024-07-15 09:57:57.864614] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:44.466 09:57:57 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:16:44.466 09:57:57 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:16:44.466 09:57:57 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:16:44.466 09:57:57 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:16:44.466 09:57:57 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:16:44.466 09:57:57 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:16:44.466 09:57:57 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:16:44.466 09:57:57 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:16:44.466 09:57:57 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:16:44.466 09:57:57 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:16:44.466 09:57:57 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:16:44.466 09:57:57 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:16:44.466 09:57:57 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:16:44.466 09:57:57 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:16:44.466 09:57:57 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:16:44.466 09:57:57 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:16:44.466 09:57:57 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:16:44.466 09:57:57 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:16:44.466 09:57:57 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:16:44.466 09:57:57 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:16:44.466 09:57:57 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:16:44.466 09:57:57 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:16:44.466 09:57:57 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:16:44.466 09:57:57 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:16:44.466 09:57:57 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:16:44.466 09:57:57 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:16:44.466 09:57:57 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:16:44.466 09:57:57 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:16:44.466 09:57:57 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:16:44.466 09:57:57 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:16:44.466 09:57:57 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:16:44.466 09:57:57 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:16:44.466 09:57:57 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 
00:16:44.467 09:57:57 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:16:44.467 09:57:57 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:16:44.467 09:57:57 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:16:44.467 09:57:57 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:16:44.467 09:57:57 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:16:44.467 09:57:57 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:16:44.467 09:57:57 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:16:44.467 09:57:57 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:16:44.467 09:57:57 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:16:44.467 09:57:57 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:16:44.467 09:57:57 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:16:44.467 09:57:57 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:16:44.467 09:57:57 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:16:44.467 09:57:57 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:16:44.467 09:57:57 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:16:44.467 09:57:57 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:16:44.467 09:57:57 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:16:44.467 09:57:57 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:16:44.467 09:57:57 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:16:44.467 09:57:57 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:16:44.467 09:57:57 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:16:44.467 09:57:57 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:16:44.467 09:57:57 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:16:44.467 09:57:57 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:16:44.467 09:57:57 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:16:44.467 09:57:57 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:16:44.467 09:57:57 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:16:44.467 09:57:57 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:16:44.467 09:57:57 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:16:44.467 09:57:57 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:16:44.467 09:57:57 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:16:44.467 09:57:57 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:16:44.467 09:57:57 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:16:44.467 09:57:57 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:16:44.467 09:57:57 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:16:44.467 09:57:57 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:16:44.467 09:57:57 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:16:44.467 09:57:57 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:16:44.467 09:57:57 
accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:16:44.467 09:57:57 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:16:44.467 09:57:57 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:16:45.876 09:57:59 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:16:45.876 09:57:59 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:16:45.876 09:57:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:16:45.876 09:57:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:16:45.876 09:57:59 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:16:45.876 09:57:59 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:16:45.876 09:57:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:16:45.876 09:57:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:16:45.876 09:57:59 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:16:45.876 09:57:59 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:16:45.876 09:57:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:16:45.876 09:57:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:16:45.876 09:57:59 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:16:45.876 09:57:59 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:16:45.876 09:57:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:16:45.876 09:57:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:16:45.876 09:57:59 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:16:45.876 09:57:59 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:16:45.876 09:57:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:16:45.876 09:57:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:16:45.876 09:57:59 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:16:45.876 09:57:59 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:16:45.876 09:57:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:16:45.876 09:57:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:16:45.876 09:57:59 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:16:45.876 09:57:59 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:16:45.876 09:57:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:16:45.876 09:57:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:16:45.876 09:57:59 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:16:45.876 09:57:59 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:16:45.876 09:57:59 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:16:45.876 00:16:45.876 real 0m1.522s 00:16:45.876 user 0m0.018s 00:16:45.876 sys 0m0.004s 00:16:45.876 09:57:59 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:45.876 09:57:59 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:16:45.876 ************************************ 00:16:45.876 END TEST accel_decomp_full_mthread 00:16:45.876 ************************************ 
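The four decompress variants that finish here (accel_decomp_mcore, accel_decomp_full_mcore, accel_decomp_mthread, accel_decomp_full_mthread) all drive the same accel_perf example binary; the only differences visible in the log are the core mask (-m 0xf for the mcore cases, giving the four reactors seen above), the per-core worker count (-T 2 for the mthread cases) and the buffer size ('111250 bytes' for the full-buffer cases versus '4096 bytes'). As a rough hand-run sketch of the last invocation — paths taken from this job's layout, and the -c /dev/fd/62 option dropped because the accel JSON config piped through it was empty in this run — the command reduces to:

# Standalone re-run of the decompress/full/mthread case logged above (sketch only;
# assumes the same /home/vagrant/spdk_repo checkout used by this job).
SPDK=/home/vagrant/spdk_repo/spdk
"$SPDK/build/examples/accel_perf" \
    -t 1 \
    -w decompress \
    -l "$SPDK/test/accel/bib" \
    -y -o 0 -T 2

The run_test wrapper seen in the log appears to be what adds the START/END banners and the real/user/sys timing summary around each case.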
00:16:45.876 09:57:59 accel -- common/autotest_common.sh@1142 -- # return 0 00:16:45.876 09:57:59 accel -- accel/accel.sh@124 -- # [[ n == y ]] 00:16:45.876 09:57:59 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:16:45.876 09:57:59 accel -- accel/accel.sh@137 -- # build_accel_config 00:16:45.876 09:57:59 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:16:45.876 09:57:59 accel -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:16:45.876 09:57:59 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:16:45.876 09:57:59 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:16:45.876 09:57:59 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:16:45.876 09:57:59 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:45.876 09:57:59 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:16:45.876 09:57:59 accel -- common/autotest_common.sh@10 -- # set +x 00:16:45.876 09:57:59 accel -- accel/accel.sh@40 -- # local IFS=, 00:16:45.876 09:57:59 accel -- accel/accel.sh@41 -- # jq -r . 00:16:45.876 ************************************ 00:16:45.876 START TEST accel_dif_functional_tests 00:16:45.876 ************************************ 00:16:45.876 09:57:59 accel.accel_dif_functional_tests -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:16:45.876 [2024-07-15 09:57:59.212852] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:16:45.876 [2024-07-15 09:57:59.213021] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64530 ] 00:16:45.876 [2024-07-15 09:57:59.352837] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:46.152 [2024-07-15 09:57:59.464506] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:46.152 [2024-07-15 09:57:59.464626] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:46.152 [2024-07-15 09:57:59.464628] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:46.152 00:16:46.152 00:16:46.152 CUnit - A unit testing framework for C - Version 2.1-3 00:16:46.152 http://cunit.sourceforge.net/ 00:16:46.152 00:16:46.152 00:16:46.152 Suite: accel_dif 00:16:46.152 Test: verify: DIF generated, GUARD check ...passed 00:16:46.152 Test: verify: DIF generated, APPTAG check ...passed 00:16:46.152 Test: verify: DIF generated, REFTAG check ...passed 00:16:46.152 Test: verify: DIF not generated, GUARD check ...[2024-07-15 09:57:59.538746] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:16:46.152 passed 00:16:46.152 Test: verify: DIF not generated, APPTAG check ...[2024-07-15 09:57:59.538966] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:16:46.152 passed 00:16:46.152 Test: verify: DIF not generated, REFTAG check ...[2024-07-15 09:57:59.539060] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:16:46.152 passed 00:16:46.152 Test: verify: APPTAG correct, APPTAG check ...passed 00:16:46.152 Test: verify: APPTAG incorrect, APPTAG check ...[2024-07-15 09:57:59.539186] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:16:46.152 passed 00:16:46.152 Test: verify: APPTAG incorrect, no 
APPTAG check ...passed 00:16:46.152 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:16:46.152 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:16:46.152 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-07-15 09:57:59.539418] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:16:46.152 passed 00:16:46.152 Test: verify copy: DIF generated, GUARD check ...passed 00:16:46.152 Test: verify copy: DIF generated, APPTAG check ...passed 00:16:46.152 Test: verify copy: DIF generated, REFTAG check ...passed 00:16:46.152 Test: verify copy: DIF not generated, GUARD check ...[2024-07-15 09:57:59.539655] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:16:46.152 passed 00:16:46.152 Test: verify copy: DIF not generated, APPTAG check ...[2024-07-15 09:57:59.539833] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:16:46.152 passed 00:16:46.153 Test: verify copy: DIF not generated, REFTAG check ...[2024-07-15 09:57:59.539908] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:16:46.153 passed 00:16:46.153 Test: generate copy: DIF generated, GUARD check ...passed 00:16:46.153 Test: generate copy: DIF generated, APTTAG check ...passed 00:16:46.153 Test: generate copy: DIF generated, REFTAG check ...passed 00:16:46.153 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:16:46.153 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:16:46.153 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:16:46.153 Test: generate copy: iovecs-len validate ...[2024-07-15 09:57:59.540220] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:16:46.153 passed 00:16:46.153 Test: generate copy: buffer alignment validate ...passed 00:16:46.153 00:16:46.153 Run Summary: Type Total Ran Passed Failed Inactive 00:16:46.153 suites 1 1 n/a 0 0 00:16:46.153 tests 26 26 26 0 0 00:16:46.153 asserts 115 115 115 0 n/a 00:16:46.153 00:16:46.153 Elapsed time = 0.004 seconds 00:16:46.411 ************************************ 00:16:46.411 END TEST accel_dif_functional_tests 00:16:46.411 ************************************ 00:16:46.411 00:16:46.411 real 0m0.578s 00:16:46.411 user 0m0.712s 00:16:46.411 sys 0m0.141s 00:16:46.411 09:57:59 accel.accel_dif_functional_tests -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:46.411 09:57:59 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:16:46.411 09:57:59 accel -- common/autotest_common.sh@1142 -- # return 0 00:16:46.411 00:16:46.411 real 0m34.088s 00:16:46.411 user 0m35.844s 00:16:46.411 sys 0m3.807s 00:16:46.411 ************************************ 00:16:46.411 END TEST accel 00:16:46.411 ************************************ 00:16:46.411 09:57:59 accel -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:46.411 09:57:59 accel -- common/autotest_common.sh@10 -- # set +x 00:16:46.411 09:57:59 -- common/autotest_common.sh@1142 -- # return 0 00:16:46.411 09:57:59 -- spdk/autotest.sh@184 -- # run_test accel_rpc /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:16:46.411 09:57:59 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:16:46.411 09:57:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:46.411 09:57:59 -- common/autotest_common.sh@10 -- # set +x 00:16:46.411 ************************************ 00:16:46.411 START TEST accel_rpc 00:16:46.411 ************************************ 00:16:46.411 09:57:59 accel_rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:16:46.411 * Looking for test storage... 00:16:46.411 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:16:46.411 09:57:59 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:16:46.411 09:57:59 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=64596 00:16:46.411 09:57:59 accel_rpc -- accel/accel_rpc.sh@13 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:16:46.411 09:57:59 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 64596 00:16:46.411 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:46.411 09:57:59 accel_rpc -- common/autotest_common.sh@829 -- # '[' -z 64596 ']' 00:16:46.411 09:57:59 accel_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:46.411 09:57:59 accel_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:46.411 09:57:59 accel_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:46.411 09:57:59 accel_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:46.411 09:57:59 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:46.670 [2024-07-15 09:58:00.030431] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:16:46.670 [2024-07-15 09:58:00.030508] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64596 ] 00:16:46.670 [2024-07-15 09:58:00.156635] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:46.929 [2024-07-15 09:58:00.282444] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:47.495 09:58:00 accel_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:47.495 09:58:00 accel_rpc -- common/autotest_common.sh@862 -- # return 0 00:16:47.495 09:58:00 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:16:47.495 09:58:00 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:16:47.495 09:58:00 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:16:47.495 09:58:00 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:16:47.495 09:58:00 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:16:47.495 09:58:00 accel_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:16:47.495 09:58:00 accel_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:47.495 09:58:00 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:47.495 ************************************ 00:16:47.495 START TEST accel_assign_opcode 00:16:47.495 ************************************ 00:16:47.495 09:58:01 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1123 -- # accel_assign_opcode_test_suite 00:16:47.495 09:58:01 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:16:47.495 09:58:01 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:47.495 09:58:01 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:16:47.495 [2024-07-15 09:58:01.013752] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:16:47.495 09:58:01 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:47.495 09:58:01 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:16:47.495 09:58:01 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:47.495 09:58:01 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:16:47.495 [2024-07-15 09:58:01.025726] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:16:47.495 09:58:01 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:47.495 09:58:01 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:16:47.495 09:58:01 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:47.495 09:58:01 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:16:47.754 09:58:01 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:47.754 09:58:01 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:16:47.754 09:58:01 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:16:47.754 09:58:01 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:16:47.754 09:58:01 accel_rpc.accel_assign_opcode -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:16:47.754 09:58:01 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:16:47.754 09:58:01 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:47.754 software 00:16:47.754 00:16:47.754 real 0m0.274s 00:16:47.754 user 0m0.054s 00:16:47.754 sys 0m0.013s 00:16:47.754 ************************************ 00:16:47.754 END TEST accel_assign_opcode 00:16:47.754 ************************************ 00:16:47.754 09:58:01 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:47.754 09:58:01 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:16:47.754 09:58:01 accel_rpc -- common/autotest_common.sh@1142 -- # return 0 00:16:47.754 09:58:01 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 64596 00:16:47.754 09:58:01 accel_rpc -- common/autotest_common.sh@948 -- # '[' -z 64596 ']' 00:16:47.754 09:58:01 accel_rpc -- common/autotest_common.sh@952 -- # kill -0 64596 00:16:47.754 09:58:01 accel_rpc -- common/autotest_common.sh@953 -- # uname 00:16:47.754 09:58:01 accel_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:47.754 09:58:01 accel_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 64596 00:16:48.012 killing process with pid 64596 00:16:48.012 09:58:01 accel_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:48.012 09:58:01 accel_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:48.012 09:58:01 accel_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 64596' 00:16:48.012 09:58:01 accel_rpc -- common/autotest_common.sh@967 -- # kill 64596 00:16:48.012 09:58:01 accel_rpc -- common/autotest_common.sh@972 -- # wait 64596 00:16:48.271 00:16:48.271 real 0m1.821s 00:16:48.271 user 0m1.918s 00:16:48.271 sys 0m0.439s 00:16:48.271 09:58:01 accel_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:48.271 ************************************ 00:16:48.271 END TEST accel_rpc 00:16:48.271 ************************************ 00:16:48.271 09:58:01 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:48.271 09:58:01 -- common/autotest_common.sh@1142 -- # return 0 00:16:48.271 09:58:01 -- spdk/autotest.sh@185 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:16:48.271 09:58:01 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:16:48.271 09:58:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:48.271 09:58:01 -- common/autotest_common.sh@10 -- # set +x 00:16:48.271 ************************************ 00:16:48.271 START TEST app_cmdline 00:16:48.271 ************************************ 00:16:48.271 09:58:01 app_cmdline -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:16:48.271 * Looking for test storage... 
00:16:48.271 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:16:48.271 09:58:01 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:16:48.271 09:58:01 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:16:48.271 09:58:01 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=64709 00:16:48.271 09:58:01 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 64709 00:16:48.271 09:58:01 app_cmdline -- common/autotest_common.sh@829 -- # '[' -z 64709 ']' 00:16:48.271 09:58:01 app_cmdline -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:48.271 09:58:01 app_cmdline -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:48.271 09:58:01 app_cmdline -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:48.271 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:48.271 09:58:01 app_cmdline -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:48.271 09:58:01 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:16:48.530 [2024-07-15 09:58:01.865958] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:16:48.530 [2024-07-15 09:58:01.866164] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64709 ] 00:16:48.530 [2024-07-15 09:58:01.996374] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:48.788 [2024-07-15 09:58:02.114369] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:49.392 09:58:02 app_cmdline -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:49.392 09:58:02 app_cmdline -- common/autotest_common.sh@862 -- # return 0 00:16:49.392 09:58:02 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:16:49.652 { 00:16:49.652 "fields": { 00:16:49.652 "commit": "2728651ee", 00:16:49.652 "major": 24, 00:16:49.652 "minor": 9, 00:16:49.652 "patch": 0, 00:16:49.652 "suffix": "-pre" 00:16:49.652 }, 00:16:49.652 "version": "SPDK v24.09-pre git sha1 2728651ee" 00:16:49.652 } 00:16:49.652 09:58:03 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:16:49.652 09:58:03 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:16:49.652 09:58:03 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:16:49.652 09:58:03 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:16:49.652 09:58:03 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:16:49.652 09:58:03 app_cmdline -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:49.652 09:58:03 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:16:49.652 09:58:03 app_cmdline -- app/cmdline.sh@26 -- # sort 00:16:49.652 09:58:03 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:16:49.652 09:58:03 app_cmdline -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:49.652 09:58:03 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:16:49.652 09:58:03 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:16:49.652 09:58:03 app_cmdline -- 
app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:16:49.652 09:58:03 app_cmdline -- common/autotest_common.sh@648 -- # local es=0 00:16:49.652 09:58:03 app_cmdline -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:16:49.652 09:58:03 app_cmdline -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:49.652 09:58:03 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:49.652 09:58:03 app_cmdline -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:49.652 09:58:03 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:49.652 09:58:03 app_cmdline -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:49.652 09:58:03 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:49.652 09:58:03 app_cmdline -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:49.652 09:58:03 app_cmdline -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:16:49.652 09:58:03 app_cmdline -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:16:49.910 2024/07/15 09:58:03 error on JSON-RPC call, method: env_dpdk_get_mem_stats, params: map[], err: error received for env_dpdk_get_mem_stats method, err: Code=-32601 Msg=Method not found 00:16:49.910 request: 00:16:49.910 { 00:16:49.910 "method": "env_dpdk_get_mem_stats", 00:16:49.910 "params": {} 00:16:49.910 } 00:16:49.910 Got JSON-RPC error response 00:16:49.910 GoRPCClient: error on JSON-RPC call 00:16:49.910 09:58:03 app_cmdline -- common/autotest_common.sh@651 -- # es=1 00:16:49.910 09:58:03 app_cmdline -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:49.910 09:58:03 app_cmdline -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:49.910 09:58:03 app_cmdline -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:49.910 09:58:03 app_cmdline -- app/cmdline.sh@1 -- # killprocess 64709 00:16:49.910 09:58:03 app_cmdline -- common/autotest_common.sh@948 -- # '[' -z 64709 ']' 00:16:49.910 09:58:03 app_cmdline -- common/autotest_common.sh@952 -- # kill -0 64709 00:16:49.910 09:58:03 app_cmdline -- common/autotest_common.sh@953 -- # uname 00:16:49.910 09:58:03 app_cmdline -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:49.910 09:58:03 app_cmdline -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 64709 00:16:49.910 09:58:03 app_cmdline -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:49.910 09:58:03 app_cmdline -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:49.910 killing process with pid 64709 00:16:49.910 09:58:03 app_cmdline -- common/autotest_common.sh@966 -- # echo 'killing process with pid 64709' 00:16:49.910 09:58:03 app_cmdline -- common/autotest_common.sh@967 -- # kill 64709 00:16:49.910 09:58:03 app_cmdline -- common/autotest_common.sh@972 -- # wait 64709 00:16:50.168 00:16:50.168 real 0m2.031s 00:16:50.168 user 0m2.557s 00:16:50.168 sys 0m0.451s 00:16:50.168 09:58:03 app_cmdline -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:50.168 09:58:03 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:16:50.168 ************************************ 00:16:50.168 END TEST app_cmdline 00:16:50.168 
************************************ 00:16:50.426 09:58:03 -- common/autotest_common.sh@1142 -- # return 0 00:16:50.426 09:58:03 -- spdk/autotest.sh@186 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:16:50.426 09:58:03 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:16:50.426 09:58:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:50.426 09:58:03 -- common/autotest_common.sh@10 -- # set +x 00:16:50.426 ************************************ 00:16:50.426 START TEST version 00:16:50.426 ************************************ 00:16:50.426 09:58:03 version -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:16:50.426 * Looking for test storage... 00:16:50.426 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:16:50.426 09:58:03 version -- app/version.sh@17 -- # get_header_version major 00:16:50.426 09:58:03 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:16:50.426 09:58:03 version -- app/version.sh@14 -- # cut -f2 00:16:50.426 09:58:03 version -- app/version.sh@14 -- # tr -d '"' 00:16:50.426 09:58:03 version -- app/version.sh@17 -- # major=24 00:16:50.426 09:58:03 version -- app/version.sh@18 -- # get_header_version minor 00:16:50.426 09:58:03 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:16:50.426 09:58:03 version -- app/version.sh@14 -- # cut -f2 00:16:50.426 09:58:03 version -- app/version.sh@14 -- # tr -d '"' 00:16:50.426 09:58:03 version -- app/version.sh@18 -- # minor=9 00:16:50.426 09:58:03 version -- app/version.sh@19 -- # get_header_version patch 00:16:50.426 09:58:03 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:16:50.426 09:58:03 version -- app/version.sh@14 -- # cut -f2 00:16:50.426 09:58:03 version -- app/version.sh@14 -- # tr -d '"' 00:16:50.426 09:58:03 version -- app/version.sh@19 -- # patch=0 00:16:50.426 09:58:03 version -- app/version.sh@20 -- # get_header_version suffix 00:16:50.426 09:58:03 version -- app/version.sh@14 -- # cut -f2 00:16:50.426 09:58:03 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:16:50.426 09:58:03 version -- app/version.sh@14 -- # tr -d '"' 00:16:50.426 09:58:03 version -- app/version.sh@20 -- # suffix=-pre 00:16:50.426 09:58:03 version -- app/version.sh@22 -- # version=24.9 00:16:50.426 09:58:03 version -- app/version.sh@25 -- # (( patch != 0 )) 00:16:50.426 09:58:03 version -- app/version.sh@28 -- # version=24.9rc0 00:16:50.426 09:58:03 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:16:50.426 09:58:03 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:16:50.426 09:58:03 version -- app/version.sh@30 -- # py_version=24.9rc0 00:16:50.426 09:58:03 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:16:50.426 00:16:50.426 real 0m0.195s 00:16:50.426 user 0m0.094s 00:16:50.426 sys 0m0.143s 00:16:50.426 09:58:03 version -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:50.426 09:58:03 version -- common/autotest_common.sh@10 -- # set +x 
00:16:50.426 ************************************ 00:16:50.426 END TEST version 00:16:50.426 ************************************ 00:16:50.685 09:58:04 -- common/autotest_common.sh@1142 -- # return 0 00:16:50.685 09:58:04 -- spdk/autotest.sh@188 -- # '[' 0 -eq 1 ']' 00:16:50.685 09:58:04 -- spdk/autotest.sh@198 -- # uname -s 00:16:50.685 09:58:04 -- spdk/autotest.sh@198 -- # [[ Linux == Linux ]] 00:16:50.685 09:58:04 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:16:50.685 09:58:04 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:16:50.685 09:58:04 -- spdk/autotest.sh@211 -- # '[' 0 -eq 1 ']' 00:16:50.685 09:58:04 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:16:50.685 09:58:04 -- spdk/autotest.sh@260 -- # timing_exit lib 00:16:50.685 09:58:04 -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:50.685 09:58:04 -- common/autotest_common.sh@10 -- # set +x 00:16:50.685 09:58:04 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:16:50.685 09:58:04 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:16:50.685 09:58:04 -- spdk/autotest.sh@279 -- # '[' 1 -eq 1 ']' 00:16:50.685 09:58:04 -- spdk/autotest.sh@280 -- # export NET_TYPE 00:16:50.685 09:58:04 -- spdk/autotest.sh@283 -- # '[' tcp = rdma ']' 00:16:50.685 09:58:04 -- spdk/autotest.sh@286 -- # '[' tcp = tcp ']' 00:16:50.685 09:58:04 -- spdk/autotest.sh@287 -- # run_test nvmf_tcp /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:16:50.685 09:58:04 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:50.685 09:58:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:50.685 09:58:04 -- common/autotest_common.sh@10 -- # set +x 00:16:50.685 ************************************ 00:16:50.685 START TEST nvmf_tcp 00:16:50.685 ************************************ 00:16:50.685 09:58:04 nvmf_tcp -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:16:50.685 * Looking for test storage... 00:16:50.685 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:16:50.685 09:58:04 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:16:50.685 09:58:04 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:16:50.685 09:58:04 nvmf_tcp -- nvmf/nvmf.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:50.685 09:58:04 nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:16:50.685 09:58:04 nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:50.685 09:58:04 nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:50.685 09:58:04 nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:50.685 09:58:04 nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:50.685 09:58:04 nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:50.685 09:58:04 nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:50.685 09:58:04 nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:50.685 09:58:04 nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:50.685 09:58:04 nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:50.685 09:58:04 nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:50.685 09:58:04 nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec 00:16:50.685 09:58:04 nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=a2b6b25a-cc90-4aea-9f09-c06f8a634aec 00:16:50.685 09:58:04 nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:50.685 09:58:04 nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:50.685 09:58:04 nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:50.685 09:58:04 nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:50.685 09:58:04 nvmf_tcp -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:50.685 09:58:04 nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:50.685 09:58:04 nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:50.685 09:58:04 nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:50.685 09:58:04 nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:50.685 09:58:04 nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:50.685 09:58:04 nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:50.685 09:58:04 nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:16:50.685 09:58:04 nvmf_tcp -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:50.685 09:58:04 nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:16:50.685 09:58:04 nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:50.685 09:58:04 nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:50.685 09:58:04 nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:50.685 09:58:04 nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:50.685 09:58:04 nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:50.685 09:58:04 nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:50.685 09:58:04 nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:50.685 09:58:04 nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:50.685 09:58:04 nvmf_tcp -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:16:50.685 09:58:04 nvmf_tcp -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:16:50.685 09:58:04 nvmf_tcp -- nvmf/nvmf.sh@20 -- # timing_enter target 00:16:50.685 09:58:04 nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:50.685 09:58:04 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:50.685 09:58:04 nvmf_tcp -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:16:50.685 09:58:04 nvmf_tcp -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:16:50.685 09:58:04 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:50.685 09:58:04 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:50.686 09:58:04 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:50.686 ************************************ 00:16:50.686 START TEST nvmf_example 00:16:50.686 ************************************ 00:16:50.686 09:58:04 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:16:50.949 * Looking for test storage... 
00:16:50.949 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:16:50.949 09:58:04 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:50.949 09:58:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:16:50.949 09:58:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:50.949 09:58:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:50.949 09:58:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:50.949 09:58:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:50.949 09:58:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:50.949 09:58:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:50.949 09:58:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:50.949 09:58:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:50.949 09:58:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:50.949 09:58:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:50.949 09:58:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec 00:16:50.949 09:58:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=a2b6b25a-cc90-4aea-9f09-c06f8a634aec 00:16:50.949 09:58:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:50.949 09:58:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:50.950 09:58:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:50.950 09:58:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:50.950 09:58:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:50.950 09:58:04 nvmf_tcp.nvmf_example -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:50.950 09:58:04 nvmf_tcp.nvmf_example -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:50.950 09:58:04 nvmf_tcp.nvmf_example -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:50.950 09:58:04 nvmf_tcp.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:50.950 09:58:04 nvmf_tcp.nvmf_example -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:16:50.950 09:58:04 nvmf_tcp.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:50.950 09:58:04 nvmf_tcp.nvmf_example -- paths/export.sh@5 -- # export PATH 00:16:50.950 09:58:04 nvmf_tcp.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:50.950 09:58:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@47 -- # : 0 00:16:50.950 09:58:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:50.950 09:58:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:50.950 09:58:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:50.950 09:58:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:50.950 09:58:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:50.950 09:58:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:50.950 09:58:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:50.950 09:58:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:50.950 09:58:04 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:16:50.950 09:58:04 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:16:50.950 09:58:04 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:16:50.950 09:58:04 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:16:50.950 09:58:04 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:16:50.950 09:58:04 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:16:50.950 09:58:04 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:16:50.950 09:58:04 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:16:50.950 09:58:04 nvmf_tcp.nvmf_example -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:50.950 09:58:04 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:16:50.950 09:58:04 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:16:50.950 09:58:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:50.950 09:58:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:50.950 09:58:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:50.950 09:58:04 
nvmf_tcp.nvmf_example -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:50.950 09:58:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:50.950 09:58:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:50.950 09:58:04 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:50.950 09:58:04 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:50.950 09:58:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:16:50.950 09:58:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:16:50.950 09:58:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:16:50.950 09:58:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:16:50.950 09:58:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:16:50.950 09:58:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@432 -- # nvmf_veth_init 00:16:50.950 09:58:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:50.950 09:58:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:50.950 09:58:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:50.950 09:58:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:16:50.950 09:58:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:50.950 09:58:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:50.950 09:58:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:50.950 09:58:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:50.950 09:58:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:50.950 09:58:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:50.950 09:58:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:50.950 09:58:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:50.950 09:58:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:16:50.950 Cannot find device "nvmf_init_br" 00:16:50.950 09:58:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@154 -- # true 00:16:50.950 09:58:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:16:50.950 Cannot find device "nvmf_tgt_br" 00:16:50.950 09:58:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@155 -- # true 00:16:50.950 09:58:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:16:50.950 Cannot find device "nvmf_tgt_br2" 00:16:50.950 09:58:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@156 -- # true 00:16:50.950 09:58:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:16:50.950 Cannot find device "nvmf_init_br" 00:16:50.950 09:58:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@157 -- # true 00:16:50.950 09:58:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:16:50.950 Cannot find device "nvmf_tgt_br" 00:16:50.950 09:58:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@158 -- # true 00:16:50.950 09:58:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:16:50.950 Cannot find device 
"nvmf_tgt_br2" 00:16:50.950 09:58:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@159 -- # true 00:16:50.950 09:58:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:16:50.950 Cannot find device "nvmf_br" 00:16:50.950 09:58:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@160 -- # true 00:16:50.950 09:58:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:16:50.950 Cannot find device "nvmf_init_if" 00:16:50.950 09:58:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@161 -- # true 00:16:50.950 09:58:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:50.950 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:50.950 09:58:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@162 -- # true 00:16:50.950 09:58:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:50.950 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:50.950 09:58:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@163 -- # true 00:16:50.950 09:58:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:16:50.950 09:58:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:50.950 09:58:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:50.950 09:58:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:50.950 09:58:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:50.950 09:58:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:50.950 09:58:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:50.950 09:58:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:50.950 09:58:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:50.950 09:58:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:16:50.950 09:58:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:16:50.950 09:58:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:16:50.950 09:58:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:16:51.211 09:58:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:51.211 09:58:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:51.211 09:58:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:51.211 09:58:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:16:51.211 09:58:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:16:51.211 09:58:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:16:51.211 09:58:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:51.211 09:58:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 
00:16:51.211 09:58:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:51.211 09:58:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:51.211 09:58:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:16:51.211 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:51.211 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.122 ms 00:16:51.211 00:16:51.211 --- 10.0.0.2 ping statistics --- 00:16:51.211 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:51.211 rtt min/avg/max/mdev = 0.122/0.122/0.122/0.000 ms 00:16:51.211 09:58:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:16:51.211 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:51.211 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.050 ms 00:16:51.211 00:16:51.211 --- 10.0.0.3 ping statistics --- 00:16:51.211 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:51.211 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:16:51.211 09:58:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:51.211 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:51.211 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:16:51.211 00:16:51.211 --- 10.0.0.1 ping statistics --- 00:16:51.211 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:51.211 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:16:51.211 09:58:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:51.211 09:58:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@433 -- # return 0 00:16:51.211 09:58:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:51.211 09:58:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:51.211 09:58:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:51.211 09:58:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:51.211 09:58:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:51.211 09:58:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:51.211 09:58:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:51.211 09:58:04 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:16:51.211 09:58:04 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:16:51.211 09:58:04 nvmf_tcp.nvmf_example -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:51.211 09:58:04 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:16:51.211 09:58:04 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:16:51.211 09:58:04 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:16:51.211 09:58:04 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=65046 00:16:51.211 09:58:04 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:51.211 09:58:04 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 65046 00:16:51.211 09:58:04 nvmf_tcp.nvmf_example -- common/autotest_common.sh@829 -- # '[' -z 65046 ']' 00:16:51.211 09:58:04 nvmf_tcp.nvmf_example -- 
common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:51.212 09:58:04 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:16:51.212 09:58:04 nvmf_tcp.nvmf_example -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:51.212 09:58:04 nvmf_tcp.nvmf_example -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:51.212 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:51.212 09:58:04 nvmf_tcp.nvmf_example -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:51.212 09:58:04 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:16:52.150 09:58:05 nvmf_tcp.nvmf_example -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:52.150 09:58:05 nvmf_tcp.nvmf_example -- common/autotest_common.sh@862 -- # return 0 00:16:52.150 09:58:05 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:16:52.150 09:58:05 nvmf_tcp.nvmf_example -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:52.150 09:58:05 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:16:52.452 09:58:05 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:52.452 09:58:05 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:52.452 09:58:05 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:16:52.452 09:58:05 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:52.452 09:58:05 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:16:52.452 09:58:05 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:52.452 09:58:05 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:16:52.452 09:58:05 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:52.452 09:58:05 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:16:52.452 09:58:05 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:16:52.452 09:58:05 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:52.452 09:58:05 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:16:52.452 09:58:05 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:52.452 09:58:05 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:16:52.452 09:58:05 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:52.452 09:58:05 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:52.452 09:58:05 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:16:52.452 09:58:05 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:52.452 09:58:05 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:52.452 09:58:05 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:52.452 09:58:05 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:16:52.452 09:58:05 
nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:52.452 09:58:05 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:16:52.452 09:58:05 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:17:02.437 Initializing NVMe Controllers 00:17:02.437 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:17:02.437 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:17:02.437 Initialization complete. Launching workers. 00:17:02.437 ======================================================== 00:17:02.437 Latency(us) 00:17:02.437 Device Information : IOPS MiB/s Average min max 00:17:02.437 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 15720.18 61.41 4070.99 731.24 21502.50 00:17:02.437 ======================================================== 00:17:02.437 Total : 15720.18 61.41 4070.99 731.24 21502.50 00:17:02.437 00:17:02.695 09:58:16 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:17:02.695 09:58:16 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:17:02.695 09:58:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:02.695 09:58:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@117 -- # sync 00:17:02.695 09:58:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:02.695 09:58:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@120 -- # set +e 00:17:02.695 09:58:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:02.695 09:58:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:02.695 rmmod nvme_tcp 00:17:02.695 rmmod nvme_fabrics 00:17:02.695 rmmod nvme_keyring 00:17:02.695 09:58:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:02.695 09:58:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@124 -- # set -e 00:17:02.695 09:58:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@125 -- # return 0 00:17:02.695 09:58:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@489 -- # '[' -n 65046 ']' 00:17:02.695 09:58:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@490 -- # killprocess 65046 00:17:02.695 09:58:16 nvmf_tcp.nvmf_example -- common/autotest_common.sh@948 -- # '[' -z 65046 ']' 00:17:02.695 09:58:16 nvmf_tcp.nvmf_example -- common/autotest_common.sh@952 -- # kill -0 65046 00:17:02.695 09:58:16 nvmf_tcp.nvmf_example -- common/autotest_common.sh@953 -- # uname 00:17:02.695 09:58:16 nvmf_tcp.nvmf_example -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:02.695 09:58:16 nvmf_tcp.nvmf_example -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 65046 00:17:02.695 09:58:16 nvmf_tcp.nvmf_example -- common/autotest_common.sh@954 -- # process_name=nvmf 00:17:02.695 09:58:16 nvmf_tcp.nvmf_example -- common/autotest_common.sh@958 -- # '[' nvmf = sudo ']' 00:17:02.695 killing process with pid 65046 00:17:02.695 09:58:16 nvmf_tcp.nvmf_example -- common/autotest_common.sh@966 -- # echo 'killing process with pid 65046' 00:17:02.695 09:58:16 nvmf_tcp.nvmf_example -- common/autotest_common.sh@967 -- # kill 65046 00:17:02.695 09:58:16 nvmf_tcp.nvmf_example -- common/autotest_common.sh@972 -- # wait 65046 00:17:03.024 nvmf threads initialize successfully 00:17:03.024 bdev subsystem 
init successfully 00:17:03.024 created a nvmf target service 00:17:03.024 create targets's poll groups done 00:17:03.024 all subsystems of target started 00:17:03.024 nvmf target is running 00:17:03.024 all subsystems of target stopped 00:17:03.025 destroy targets's poll groups done 00:17:03.025 destroyed the nvmf target service 00:17:03.025 bdev subsystem finish successfully 00:17:03.025 nvmf threads destroy successfully 00:17:03.025 09:58:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:03.025 09:58:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:03.025 09:58:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:03.025 09:58:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:03.025 09:58:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:03.025 09:58:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:03.025 09:58:16 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:03.025 09:58:16 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:03.025 09:58:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:17:03.025 09:58:16 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:17:03.025 09:58:16 nvmf_tcp.nvmf_example -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:03.025 09:58:16 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:17:03.025 00:17:03.025 real 0m12.208s 00:17:03.025 user 0m44.194s 00:17:03.025 sys 0m1.721s 00:17:03.025 09:58:16 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:03.025 09:58:16 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:17:03.025 ************************************ 00:17:03.025 END TEST nvmf_example 00:17:03.025 ************************************ 00:17:03.025 09:58:16 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:17:03.025 09:58:16 nvmf_tcp -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /home/vagrant/spdk_repo/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:17:03.025 09:58:16 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:03.025 09:58:16 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:03.025 09:58:16 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:03.025 ************************************ 00:17:03.025 START TEST nvmf_filesystem 00:17:03.025 ************************************ 00:17:03.025 09:58:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:17:03.025 * Looking for test storage... 
00:17:03.025 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:17:03.025 09:58:16 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:17:03.025 09:58:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:17:03.025 09:58:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:17:03.025 09:58:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:17:03.025 09:58:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:17:03.025 09:58:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:17:03.025 09:58:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:17:03.025 09:58:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:17:03.025 09:58:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:17:03.025 09:58:16 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:17:03.025 09:58:16 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:17:03.025 09:58:16 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:17:03.025 09:58:16 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:17:03.025 09:58:16 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=y 00:17:03.025 09:58:16 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:17:03.025 09:58:16 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:17:03.025 09:58:16 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:17:03.025 09:58:16 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:17:03.025 09:58:16 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:17:03.025 09:58:16 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:17:03.025 09:58:16 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:17:03.025 09:58:16 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:17:03.025 09:58:16 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:17:03.025 09:58:16 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:17:03.025 09:58:16 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:17:03.025 09:58:16 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:17:03.025 09:58:16 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:17:03.025 09:58:16 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:17:03.025 09:58:16 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:17:03.288 09:58:16 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:17:03.288 09:58:16 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:17:03.288 09:58:16 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:17:03.288 09:58:16 nvmf_tcp.nvmf_filesystem -- 
common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:17:03.288 09:58:16 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:17:03.288 09:58:16 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:17:03.288 09:58:16 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:17:03.288 09:58:16 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:17:03.288 09:58:16 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:17:03.288 09:58:16 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:17:03.288 09:58:16 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:17:03.288 09:58:16 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:17:03.288 09:58:16 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:17:03.288 09:58:16 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:17:03.288 09:58:16 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:17:03.288 09:58:16 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:17:03.288 09:58:16 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:17:03.288 09:58:16 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:17:03.288 09:58:16 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:17:03.288 09:58:16 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:17:03.288 09:58:16 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:17:03.288 09:58:16 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:17:03.288 09:58:16 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:17:03.288 09:58:16 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:17:03.288 09:58:16 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:17:03.288 09:58:16 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:17:03.288 09:58:16 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:17:03.288 09:58:16 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:17:03.288 09:58:16 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:17:03.288 09:58:16 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:17:03.288 09:58:16 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:17:03.288 09:58:16 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=n 00:17:03.288 09:58:16 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:17:03.288 09:58:16 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:17:03.288 09:58:16 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:17:03.288 09:58:16 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:17:03.288 09:58:16 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:17:03.288 09:58:16 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:17:03.288 09:58:16 nvmf_tcp.nvmf_filesystem -- 
common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:17:03.288 09:58:16 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_GOLANG=y 00:17:03.288 09:58:16 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:17:03.288 09:58:16 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:17:03.288 09:58:16 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:17:03.288 09:58:16 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:17:03.288 09:58:16 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:17:03.288 09:58:16 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:17:03.288 09:58:16 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:17:03.288 09:58:16 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:17:03.288 09:58:16 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:17:03.288 09:58:16 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_FC=n 00:17:03.288 09:58:16 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_AVAHI=y 00:17:03.288 09:58:16 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:17:03.288 09:58:16 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:17:03.288 09:58:16 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:17:03.288 09:58:16 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:17:03.288 09:58:16 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:17:03.288 09:58:16 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES=128 00:17:03.288 09:58:16 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:17:03.288 09:58:16 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:17:03.288 09:58:16 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:17:03.288 09:58:16 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:17:03.288 09:58:16 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:17:03.288 09:58:16 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_URING=n 00:17:03.288 09:58:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:17:03.288 09:58:16 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:17:03.288 09:58:16 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:17:03.288 09:58:16 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 00:17:03.288 09:58:16 nvmf_tcp.nvmf_filesystem -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:17:03.288 09:58:16 nvmf_tcp.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:17:03.288 09:58:16 nvmf_tcp.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:17:03.288 09:58:16 nvmf_tcp.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:17:03.288 09:58:16 nvmf_tcp.nvmf_filesystem -- 
common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:17:03.288 09:58:16 nvmf_tcp.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:17:03.288 09:58:16 nvmf_tcp.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:17:03.288 09:58:16 nvmf_tcp.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:17:03.288 09:58:16 nvmf_tcp.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:17:03.288 09:58:16 nvmf_tcp.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:17:03.288 09:58:16 nvmf_tcp.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:17:03.288 09:58:16 nvmf_tcp.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:17:03.288 #define SPDK_CONFIG_H 00:17:03.288 #define SPDK_CONFIG_APPS 1 00:17:03.288 #define SPDK_CONFIG_ARCH native 00:17:03.288 #undef SPDK_CONFIG_ASAN 00:17:03.288 #define SPDK_CONFIG_AVAHI 1 00:17:03.288 #undef SPDK_CONFIG_CET 00:17:03.288 #define SPDK_CONFIG_COVERAGE 1 00:17:03.288 #define SPDK_CONFIG_CROSS_PREFIX 00:17:03.288 #undef SPDK_CONFIG_CRYPTO 00:17:03.288 #undef SPDK_CONFIG_CRYPTO_MLX5 00:17:03.288 #undef SPDK_CONFIG_CUSTOMOCF 00:17:03.288 #undef SPDK_CONFIG_DAOS 00:17:03.288 #define SPDK_CONFIG_DAOS_DIR 00:17:03.288 #define SPDK_CONFIG_DEBUG 1 00:17:03.288 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:17:03.288 #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/spdk/dpdk/build 00:17:03.288 #define SPDK_CONFIG_DPDK_INC_DIR 00:17:03.288 #define SPDK_CONFIG_DPDK_LIB_DIR 00:17:03.288 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:17:03.288 #undef SPDK_CONFIG_DPDK_UADK 00:17:03.288 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:17:03.288 #define SPDK_CONFIG_EXAMPLES 1 00:17:03.288 #undef SPDK_CONFIG_FC 00:17:03.288 #define SPDK_CONFIG_FC_PATH 00:17:03.288 #define SPDK_CONFIG_FIO_PLUGIN 1 00:17:03.288 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:17:03.288 #undef SPDK_CONFIG_FUSE 00:17:03.288 #undef SPDK_CONFIG_FUZZER 00:17:03.288 #define SPDK_CONFIG_FUZZER_LIB 00:17:03.288 #define SPDK_CONFIG_GOLANG 1 00:17:03.288 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:17:03.288 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:17:03.288 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:17:03.288 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:17:03.288 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:17:03.288 #undef SPDK_CONFIG_HAVE_LIBBSD 00:17:03.288 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:17:03.288 #define SPDK_CONFIG_IDXD 1 00:17:03.288 #define SPDK_CONFIG_IDXD_KERNEL 1 00:17:03.288 #undef SPDK_CONFIG_IPSEC_MB 00:17:03.288 #define SPDK_CONFIG_IPSEC_MB_DIR 00:17:03.288 #define SPDK_CONFIG_ISAL 1 00:17:03.288 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:17:03.288 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:17:03.288 #define SPDK_CONFIG_LIBDIR 00:17:03.288 #undef SPDK_CONFIG_LTO 00:17:03.288 #define SPDK_CONFIG_MAX_LCORES 128 00:17:03.288 #define SPDK_CONFIG_NVME_CUSE 1 00:17:03.288 #undef SPDK_CONFIG_OCF 00:17:03.288 #define SPDK_CONFIG_OCF_PATH 00:17:03.288 #define SPDK_CONFIG_OPENSSL_PATH 00:17:03.288 #undef SPDK_CONFIG_PGO_CAPTURE 00:17:03.288 #define SPDK_CONFIG_PGO_DIR 00:17:03.288 #undef SPDK_CONFIG_PGO_USE 00:17:03.288 #define SPDK_CONFIG_PREFIX /usr/local 00:17:03.288 #undef SPDK_CONFIG_RAID5F 00:17:03.288 #undef SPDK_CONFIG_RBD 00:17:03.288 #define SPDK_CONFIG_RDMA 1 00:17:03.288 #define SPDK_CONFIG_RDMA_PROV verbs 
00:17:03.288 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:17:03.288 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:17:03.288 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:17:03.288 #define SPDK_CONFIG_SHARED 1 00:17:03.289 #undef SPDK_CONFIG_SMA 00:17:03.289 #define SPDK_CONFIG_TESTS 1 00:17:03.289 #undef SPDK_CONFIG_TSAN 00:17:03.289 #define SPDK_CONFIG_UBLK 1 00:17:03.289 #define SPDK_CONFIG_UBSAN 1 00:17:03.289 #undef SPDK_CONFIG_UNIT_TESTS 00:17:03.289 #undef SPDK_CONFIG_URING 00:17:03.289 #define SPDK_CONFIG_URING_PATH 00:17:03.289 #undef SPDK_CONFIG_URING_ZNS 00:17:03.289 #define SPDK_CONFIG_USDT 1 00:17:03.289 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:17:03.289 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:17:03.289 #undef SPDK_CONFIG_VFIO_USER 00:17:03.289 #define SPDK_CONFIG_VFIO_USER_DIR 00:17:03.289 #define SPDK_CONFIG_VHOST 1 00:17:03.289 #define SPDK_CONFIG_VIRTIO 1 00:17:03.289 #undef SPDK_CONFIG_VTUNE 00:17:03.289 #define SPDK_CONFIG_VTUNE_DIR 00:17:03.289 #define SPDK_CONFIG_WERROR 1 00:17:03.289 #define SPDK_CONFIG_WPDK_DIR 00:17:03.289 #undef SPDK_CONFIG_XNVME 00:17:03.289 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:17:03.289 09:58:16 nvmf_tcp.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:17:03.289 09:58:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:03.289 09:58:16 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:03.289 09:58:16 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:03.289 09:58:16 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:03.289 09:58:16 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:03.289 09:58:16 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:03.289 09:58:16 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # 
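The glob match on "#define SPDK_CONFIG_DEBUG" traced above is applications.sh checking whether the generated config header was built with debug enabled before it will honor SPDK_AUTOTEST_DEBUG_APPS. A minimal standalone sketch of the same check (repo path as used by this job; adjust for your own checkout):

  # Sketch: gate debug-only tooling on the generated SPDK config header.
  SPDK_ROOT=/home/vagrant/spdk_repo/spdk
  config_h="$SPDK_ROOT/include/spdk/config.h"
  if [[ -e "$config_h" ]] && grep -q '^#define SPDK_CONFIG_DEBUG' "$config_h"; then
      echo "debug build detected - debug apps may be enabled"
  else
      echo "release build - debug-only apps stay disabled"
  fi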
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:03.289 09:58:16 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:17:03.289 09:58:16 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:03.289 09:58:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:17:03.289 09:58:16 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:17:03.289 09:58:16 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:17:03.289 09:58:16 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:17:03.289 09:58:16 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:17:03.289 09:58:16 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk 00:17:03.289 09:58:16 nvmf_tcp.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:17:03.289 09:58:16 nvmf_tcp.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:17:03.289 09:58:16 nvmf_tcp.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/home/vagrant/spdk_repo/spdk/../output/power 00:17:03.289 09:58:16 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # uname -s 00:17:03.289 09:58:16 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:17:03.289 09:58:16 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:17:03.289 09:58:16 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:17:03.289 09:58:16 nvmf_tcp.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:17:03.289 09:58:16 nvmf_tcp.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:17:03.289 09:58:16 nvmf_tcp.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:17:03.289 09:58:16 nvmf_tcp.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:17:03.289 09:58:16 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:17:03.289 09:58:16 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:17:03.289 09:58:16 nvmf_tcp.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:17:03.289 09:58:16 nvmf_tcp.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:17:03.289 09:58:16 nvmf_tcp.nvmf_filesystem -- 
pm/common@81 -- # [[ Linux == Linux ]] 00:17:03.289 09:58:16 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ QEMU != QEMU ]] 00:17:03.289 09:58:16 nvmf_tcp.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /home/vagrant/spdk_repo/spdk/../output/power ]] 00:17:03.289 09:58:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:17:03.289 09:58:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:17:03.289 09:58:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:17:03.289 09:58:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:17:03.289 09:58:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:17:03.289 09:58:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:17:03.289 09:58:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:17:03.289 09:58:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:17:03.289 09:58:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:17:03.289 09:58:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:17:03.289 09:58:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:17:03.289 09:58:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:17:03.289 09:58:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:17:03.289 09:58:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:17:03.289 09:58:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:17:03.289 09:58:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:17:03.289 09:58:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:17:03.289 09:58:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:17:03.289 09:58:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:17:03.289 09:58:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:17:03.289 09:58:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:17:03.289 09:58:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:17:03.289 09:58:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:17:03.289 09:58:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:17:03.289 09:58:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:17:03.289 09:58:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:17:03.289 09:58:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 0 00:17:03.289 09:58:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:17:03.289 09:58:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:17:03.289 09:58:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:17:03.289 09:58:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:17:03.289 09:58:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:17:03.289 09:58:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:17:03.289 09:58:16 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:17:03.289 09:58:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 0 00:17:03.289 09:58:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:17:03.289 09:58:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:17:03.289 09:58:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:17:03.289 09:58:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:17:03.289 09:58:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:17:03.289 09:58:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:17:03.289 09:58:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:17:03.289 09:58:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:17:03.289 09:58:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:17:03.289 09:58:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:17:03.289 09:58:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:17:03.289 09:58:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:17:03.289 09:58:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:17:03.289 09:58:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:17:03.289 09:58:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:17:03.289 09:58:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:17:03.289 09:58:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:17:03.289 09:58:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:17:03.289 09:58:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:17:03.289 09:58:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:17:03.289 09:58:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:17:03.289 09:58:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:17:03.289 09:58:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_LVOL 00:17:03.289 09:58:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:17:03.289 09:58:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:17:03.289 09:58:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:17:03.289 09:58:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:17:03.289 09:58:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 1 00:17:03.289 09:58:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:17:03.289 09:58:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 00:17:03.289 09:58:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:17:03.289 09:58:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 0 00:17:03.289 09:58:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:17:03.289 09:58:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@128 
-- # : 0 00:17:03.289 09:58:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:17:03.289 09:58:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:17:03.289 09:58:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:17:03.289 09:58:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:17:03.290 09:58:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:17:03.290 09:58:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:17:03.290 09:58:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:17:03.290 09:58:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:17:03.290 09:58:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:17:03.290 09:58:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 00:17:03.290 09:58:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:17:03.290 09:58:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@140 -- # : true 00:17:03.290 09:58:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:17:03.290 09:58:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@142 -- # : 0 00:17:03.290 09:58:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:17:03.290 09:58:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:17:03.290 09:58:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:17:03.290 09:58:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 1 00:17:03.290 09:58:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:17:03.290 09:58:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:17:03.290 09:58:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:17:03.290 09:58:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:17:03.290 09:58:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:17:03.290 09:58:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:17:03.290 09:58:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:17:03.290 09:58:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@154 -- # : 00:17:03.290 09:58:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:17:03.290 09:58:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:17:03.290 09:58:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:17:03.290 09:58:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:17:03.290 09:58:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:17:03.290 09:58:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:17:03.290 09:58:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:17:03.290 09:58:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:17:03.290 09:58:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL_DSA 00:17:03.290 09:58:16 nvmf_tcp.nvmf_filesystem -- 
common/autotest_common.sh@164 -- # : 0 00:17:03.290 09:58:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_IAA 00:17:03.290 09:58:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@167 -- # : 00:17:03.290 09:58:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@168 -- # export SPDK_TEST_FUZZER_TARGET 00:17:03.290 09:58:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 1 00:17:03.290 09:58:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_NVMF_MDNS 00:17:03.290 09:58:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 1 00:17:03.290 09:58:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_JSONRPC_GO_CLIENT 00:17:03.290 09:58:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:17:03.290 09:58:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:17:03.290 09:58:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:17:03.290 09:58:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:17:03.290 09:58:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:17:03.290 09:58:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:17:03.290 09:58:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@178 -- # export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:17:03.290 09:58:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@178 -- # LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:17:03.290 09:58:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@181 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:17:03.290 09:58:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@181 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:17:03.290 09:58:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@185 -- # export 
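The long run of ": 0" / ": 1" lines paired with "export SPDK_TEST_*" above is autotest_common.sh applying bash default-value expansion to every test switch: values already set by autorun-spdk.conf survive, everything else falls back to a default. A short sketch of the idiom (the three flags shown are only examples):

  # Sketch: default-then-export pattern behind the SPDK_TEST_* switches.
  # ${VAR:=default} assigns only when VAR is unset or empty, so settings
  # sourced earlier from autorun-spdk.conf take precedence.
  : "${SPDK_TEST_NVMF:=0}";             export SPDK_TEST_NVMF
  : "${SPDK_TEST_NVMF_TRANSPORT:=tcp}"; export SPDK_TEST_NVMF_TRANSPORT
  : "${SPDK_RUN_UBSAN:=0}";             export SPDK_RUN_UBSAN
  echo "NVMF=$SPDK_TEST_NVMF transport=$SPDK_TEST_NVMF_TRANSPORT ubsan=$SPDK_RUN_UBSAN"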
PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:17:03.290 09:58:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@185 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:17:03.290 09:58:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@189 -- # export PYTHONDONTWRITEBYTECODE=1 00:17:03.290 09:58:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@189 -- # PYTHONDONTWRITEBYTECODE=1 00:17:03.290 09:58:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:17:03.290 09:58:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:17:03.290 09:58:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@194 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:17:03.290 09:58:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@194 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:17:03.290 09:58:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@198 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:17:03.290 09:58:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@199 -- # rm -rf /var/tmp/asan_suppression_file 00:17:03.290 09:58:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@200 -- # cat 00:17:03.290 09:58:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@236 -- # echo leak:libfuse3.so 00:17:03.290 09:58:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@238 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:17:03.290 09:58:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@238 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:17:03.290 09:58:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@240 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:17:03.290 09:58:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@240 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:17:03.290 09:58:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@242 -- # '[' -z /var/spdk/dependencies ']' 00:17:03.290 09:58:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@245 -- # export DEPENDENCY_DIR 00:17:03.290 09:58:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:17:03.290 09:58:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:17:03.290 09:58:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@250 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:17:03.290 09:58:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@250 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:17:03.290 09:58:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # export 
QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:17:03.290 09:58:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:17:03.290 09:58:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@254 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:17:03.290 09:58:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@254 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:17:03.290 09:58:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@256 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:17:03.290 09:58:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@256 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:17:03.290 09:58:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@259 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:17:03.290 09:58:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@259 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:17:03.290 09:58:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@262 -- # '[' 0 -eq 0 ']' 00:17:03.290 09:58:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@263 -- # export valgrind= 00:17:03.290 09:58:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@263 -- # valgrind= 00:17:03.290 09:58:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # uname -s 00:17:03.290 09:58:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # '[' Linux = Linux ']' 00:17:03.290 09:58:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@270 -- # HUGEMEM=4096 00:17:03.290 09:58:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # export CLEAR_HUGE=yes 00:17:03.290 09:58:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # CLEAR_HUGE=yes 00:17:03.290 09:58:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:17:03.290 09:58:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:17:03.290 09:58:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@279 -- # MAKE=make 00:17:03.290 09:58:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@280 -- # MAKEFLAGS=-j10 00:17:03.290 09:58:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@296 -- # export HUGEMEM=4096 00:17:03.290 09:58:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@296 -- # HUGEMEM=4096 00:17:03.290 09:58:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@298 -- # NO_HUGE=() 00:17:03.290 09:58:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@299 -- # TEST_MODE= 00:17:03.290 09:58:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@300 -- # for i in "$@" 00:17:03.290 09:58:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@301 -- # case "$i" in 00:17:03.290 09:58:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@306 -- # TEST_TRANSPORT=tcp 00:17:03.290 09:58:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@318 -- # [[ -z 65298 ]] 00:17:03.290 09:58:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@318 -- # kill -0 65298 00:17:03.290 09:58:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1680 -- # set_test_storage 2147483648 00:17:03.290 09:58:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@328 -- # [[ -v testdir ]] 00:17:03.290 09:58:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@330 -- # local requested_size=2147483648 00:17:03.290 09:58:16 nvmf_tcp.nvmf_filesystem -- 
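Just before reserving scratch space, the harness runs the "[[ -z 65298 ]]" / "kill -0 65298" pair seen above: kill -0 delivers no signal, it only reports whether the PID still exists, so this is the usual "is that process still alive?" probe. A tiny sketch (the PID value is illustrative):

  # Sketch: signal-0 liveness probe, as used before set_test_storage runs.
  pid=65298   # illustrative; the harness checks the PID handed to it by the caller
  if [[ -n "$pid" ]] && kill -0 "$pid" 2>/dev/null; then
      echo "process $pid is still running"
  else
      echo "process $pid has exited (or no PID was given)"
  fi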
common/autotest_common.sh@331 -- # local mount target_dir 00:17:03.290 09:58:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@333 -- # local -A mounts fss sizes avails uses 00:17:03.290 09:58:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@334 -- # local source fs size avail mount use 00:17:03.290 09:58:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@336 -- # local storage_fallback storage_candidates 00:17:03.290 09:58:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@338 -- # mktemp -udt spdk.XXXXXX 00:17:03.290 09:58:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@338 -- # storage_fallback=/tmp/spdk.Msg1f0 00:17:03.290 09:58:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@343 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:17:03.290 09:58:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@345 -- # [[ -n '' ]] 00:17:03.290 09:58:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@350 -- # [[ -n '' ]] 00:17:03.290 09:58:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@355 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/nvmf/target /tmp/spdk.Msg1f0/tests/target /tmp/spdk.Msg1f0 00:17:03.290 09:58:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@358 -- # requested_size=2214592512 00:17:03.290 09:58:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:17:03.290 09:58:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # df -T 00:17:03.290 09:58:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # grep -v Filesystem 00:17:03.290 09:58:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=devtmpfs 00:17:03.290 09:58:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=devtmpfs 00:17:03.291 09:58:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=4194304 00:17:03.291 09:58:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=4194304 00:17:03.291 09:58:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=0 00:17:03.291 09:58:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:17:03.291 09:58:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:17:03.291 09:58:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:17:03.291 09:58:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=6264512512 00:17:03.291 09:58:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=6267887616 00:17:03.291 09:58:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=3375104 00:17:03.291 09:58:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:17:03.291 09:58:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:17:03.291 09:58:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:17:03.291 09:58:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=2494353408 00:17:03.291 09:58:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=2507157504 00:17:03.291 09:58:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=12804096 00:17:03.291 
09:58:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:17:03.291 09:58:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/vda5 00:17:03.291 09:58:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=btrfs 00:17:03.291 09:58:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=13771440128 00:17:03.291 09:58:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=20314062848 00:17:03.291 09:58:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=5258477568 00:17:03.291 09:58:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:17:03.291 09:58:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/vda5 00:17:03.291 09:58:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=btrfs 00:17:03.291 09:58:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=13771440128 00:17:03.291 09:58:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=20314062848 00:17:03.291 09:58:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=5258477568 00:17:03.291 09:58:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:17:03.291 09:58:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/vda2 00:17:03.291 09:58:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=ext4 00:17:03.291 09:58:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=843546624 00:17:03.291 09:58:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=1012768768 00:17:03.291 09:58:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=100016128 00:17:03.291 09:58:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:17:03.291 09:58:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:17:03.291 09:58:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:17:03.291 09:58:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=6267752448 00:17:03.291 09:58:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=6267887616 00:17:03.291 09:58:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=135168 00:17:03.291 09:58:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:17:03.291 09:58:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/vda3 00:17:03.291 09:58:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=vfat 00:17:03.291 09:58:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=92499968 00:17:03.291 09:58:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=104607744 00:17:03.291 09:58:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=12107776 00:17:03.291 09:58:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:17:03.291 09:58:16 nvmf_tcp.nvmf_filesystem -- 
common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:17:03.291 09:58:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:17:03.291 09:58:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=1253572608 00:17:03.291 09:58:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=1253576704 00:17:03.291 09:58:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4096 00:17:03.291 09:58:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:17:03.291 09:58:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest/fedora38-libvirt/output 00:17:03.291 09:58:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=fuse.sshfs 00:17:03.291 09:58:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=92762251264 00:17:03.291 09:58:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=105088212992 00:17:03.291 09:58:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=6940528640 00:17:03.291 09:58:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:17:03.291 09:58:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@366 -- # printf '* Looking for test storage...\n' 00:17:03.291 * Looking for test storage... 00:17:03.291 09:58:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@368 -- # local target_space new_size 00:17:03.291 09:58:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@369 -- # for target_dir in "${storage_candidates[@]}" 00:17:03.291 09:58:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # df /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:17:03.291 09:58:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # awk '$1 !~ /Filesystem/{print $6}' 00:17:03.291 09:58:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # mount=/home 00:17:03.291 09:58:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@374 -- # target_space=13771440128 00:17:03.291 09:58:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@375 -- # (( target_space == 0 || target_space < requested_size )) 00:17:03.291 09:58:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@378 -- # (( target_space >= requested_size )) 00:17:03.291 09:58:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ btrfs == tmpfs ]] 00:17:03.291 09:58:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ btrfs == ramfs ]] 00:17:03.291 09:58:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ /home == / ]] 00:17:03.291 09:58:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvmf/target 00:17:03.291 09:58:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvmf/target 00:17:03.291 09:58:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@388 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:17:03.291 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:17:03.291 09:58:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@389 -- # return 0 00:17:03.291 09:58:16 nvmf_tcp.nvmf_filesystem -- 
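What finished just above is set_test_storage: it parsed df -T into per-mount arrays, found the btrfs /home volume with roughly 13.7 GB free against a roughly 2.2 GB request, and exported SPDK_TEST_STORAGE accordingly. A compact sketch of the same space check, assuming GNU coreutils df:

  # Sketch: accept a scratch directory only if it has enough free space.
  # 2214592512 bytes mirrors the 2 GiB + 64 MiB slack requested above.
  requested_size=2214592512
  candidate=/home/vagrant/spdk_repo/spdk/test/nvmf/target
  avail=$(df --output=avail -B1 "$candidate" | tail -n1)
  if (( avail >= requested_size )); then
      export SPDK_TEST_STORAGE="$candidate"
      echo "* Found test storage at $SPDK_TEST_STORAGE"
  else
      echo "not enough space on $candidate ($avail < $requested_size bytes)" >&2
  fi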
common/autotest_common.sh@1682 -- # set -o errtrace 00:17:03.291 09:58:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1683 -- # shopt -s extdebug 00:17:03.291 09:58:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1684 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:17:03.291 09:58:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1686 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:17:03.291 09:58:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1687 -- # true 00:17:03.291 09:58:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1689 -- # xtrace_fd 00:17:03.291 09:58:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:17:03.291 09:58:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:17:03.291 09:58:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:17:03.291 09:58:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:17:03.291 09:58:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:17:03.291 09:58:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:17:03.291 09:58:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:17:03.291 09:58:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:17:03.291 09:58:16 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:03.291 09:58:16 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:17:03.291 09:58:16 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:03.291 09:58:16 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:03.291 09:58:16 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:03.291 09:58:16 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:03.291 09:58:16 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:03.291 09:58:16 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:03.291 09:58:16 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:03.291 09:58:16 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:03.291 09:58:16 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:03.291 09:58:16 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:03.291 09:58:16 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec 00:17:03.291 09:58:16 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=a2b6b25a-cc90-4aea-9f09-c06f8a634aec 00:17:03.291 09:58:16 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:03.291 09:58:16 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:03.291 09:58:16 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:03.291 09:58:16 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:03.291 09:58:16 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:03.291 09:58:16 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh 
]] 00:17:03.291 09:58:16 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:03.291 09:58:16 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:03.291 09:58:16 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:03.291 09:58:16 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:03.291 09:58:16 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:03.291 09:58:16 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:17:03.292 09:58:16 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:03.292 09:58:16 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@47 -- # : 0 00:17:03.292 09:58:16 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:03.292 09:58:16 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:03.292 09:58:16 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:03.292 09:58:16 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:03.292 09:58:16 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:17:03.292 09:58:16 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:03.292 09:58:16 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:03.292 09:58:16 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:03.292 09:58:16 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:17:03.292 09:58:16 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:17:03.292 09:58:16 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:17:03.292 09:58:16 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:03.292 09:58:16 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:03.292 09:58:16 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:03.292 09:58:16 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:03.292 09:58:16 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:03.292 09:58:16 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:03.292 09:58:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:03.292 09:58:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:03.292 09:58:16 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:17:03.292 09:58:16 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:17:03.292 09:58:16 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:17:03.292 09:58:16 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:17:03.292 09:58:16 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:17:03.292 09:58:16 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@432 -- # nvmf_veth_init 00:17:03.292 09:58:16 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:03.292 09:58:16 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:03.292 09:58:16 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:17:03.292 09:58:16 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:17:03.292 09:58:16 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:03.292 09:58:16 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:03.292 09:58:16 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:03.292 09:58:16 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:03.292 09:58:16 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:03.292 09:58:16 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:03.292 09:58:16 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:03.292 09:58:16 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:03.292 09:58:16 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:17:03.292 09:58:16 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:17:03.292 Cannot find device "nvmf_tgt_br" 00:17:03.292 09:58:16 
nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@155 -- # true 00:17:03.292 09:58:16 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:17:03.292 Cannot find device "nvmf_tgt_br2" 00:17:03.292 09:58:16 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@156 -- # true 00:17:03.292 09:58:16 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:17:03.292 09:58:16 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:17:03.292 Cannot find device "nvmf_tgt_br" 00:17:03.292 09:58:16 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@158 -- # true 00:17:03.292 09:58:16 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:17:03.552 Cannot find device "nvmf_tgt_br2" 00:17:03.552 09:58:16 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@159 -- # true 00:17:03.552 09:58:16 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:17:03.552 09:58:16 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:17:03.552 09:58:16 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:03.552 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:03.552 09:58:16 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@162 -- # true 00:17:03.552 09:58:16 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:03.552 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:03.552 09:58:16 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@163 -- # true 00:17:03.552 09:58:16 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:17:03.552 09:58:16 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:03.552 09:58:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:03.552 09:58:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:03.552 09:58:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:03.552 09:58:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:03.552 09:58:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:03.552 09:58:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:03.552 09:58:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:17:03.552 09:58:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:17:03.552 09:58:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:17:03.552 09:58:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:17:03.552 09:58:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:17:03.552 09:58:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:03.552 09:58:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:03.552 09:58:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@189 
-- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:03.552 09:58:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:17:03.552 09:58:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:17:03.552 09:58:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:17:03.552 09:58:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:03.552 09:58:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:03.552 09:58:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:03.552 09:58:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:03.552 09:58:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:17:03.552 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:03.552 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.050 ms 00:17:03.552 00:17:03.552 --- 10.0.0.2 ping statistics --- 00:17:03.552 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:03.552 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:17:03.552 09:58:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:17:03.552 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:03.552 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.029 ms 00:17:03.552 00:17:03.552 --- 10.0.0.3 ping statistics --- 00:17:03.552 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:03.552 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:17:03.552 09:58:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:03.552 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:03.552 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.040 ms 00:17:03.552 00:17:03.552 --- 10.0.0.1 ping statistics --- 00:17:03.552 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:03.552 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:17:03.552 09:58:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:03.552 09:58:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@433 -- # return 0 00:17:03.552 09:58:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:03.552 09:58:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:03.552 09:58:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:03.552 09:58:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:03.552 09:58:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:03.552 09:58:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:03.552 09:58:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:03.811 09:58:17 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:17:03.811 09:58:17 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:03.811 09:58:17 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:03.811 09:58:17 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:17:03.811 ************************************ 00:17:03.811 START TEST nvmf_filesystem_no_in_capsule 00:17:03.811 ************************************ 00:17:03.811 09:58:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1123 -- # nvmf_filesystem_part 0 00:17:03.812 09:58:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:17:03.812 09:58:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:17:03.812 09:58:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:03.812 09:58:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:03.812 09:58:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:17:03.812 09:58:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:03.812 09:58:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=65463 00:17:03.812 09:58:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 65463 00:17:03.812 09:58:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@829 -- # '[' -z 65463 ']' 00:17:03.812 09:58:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:03.812 09:58:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:03.812 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
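At this point nvmf_veth_init has built the topology the three pings just verified: the target interfaces sit inside the nvmf_tgt_ns_spdk namespace (10.0.0.2 and 10.0.0.3), the initiator interface stays in the root namespace (10.0.0.1), everything is bridged over nvmf_br, and iptables opens TCP port 4420. A trimmed, standalone sketch of that setup, reduced to a single target interface and reusing the names from the trace (run as root):

  # Sketch: veth pairs + network namespace + bridge, as in nvmf_veth_init.
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up

  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br

  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2   # reachability check, as the trace does above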
00:17:03.812 09:58:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:03.812 09:58:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:03.812 09:58:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:17:03.812 [2024-07-15 09:58:17.234899] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:17:03.812 [2024-07-15 09:58:17.234977] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:03.812 [2024-07-15 09:58:17.374798] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:04.071 [2024-07-15 09:58:17.483941] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:04.071 [2024-07-15 09:58:17.483989] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:04.071 [2024-07-15 09:58:17.483996] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:04.071 [2024-07-15 09:58:17.484002] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:04.071 [2024-07-15 09:58:17.484007] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:04.071 [2024-07-15 09:58:17.484187] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:04.071 [2024-07-15 09:58:17.484372] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:04.071 [2024-07-15 09:58:17.484411] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:04.071 [2024-07-15 09:58:17.484422] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:17:04.638 09:58:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:04.638 09:58:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@862 -- # return 0 00:17:04.638 09:58:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:04.638 09:58:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:04.638 09:58:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:17:04.638 09:58:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:04.638 09:58:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:17:04.638 09:58:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:17:04.638 09:58:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:04.638 09:58:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:17:04.638 [2024-07-15 09:58:18.143836] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:04.638 
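The trace above is nvmf/common.sh finishing the test network and launching the target inside its namespace; condensed into plain shell, it comes down to the commands below (only commands visible in this log; the namespace, veth pairs and 10.0.0.x addressing are set up earlier in the same script and are assumed to exist, and interface names are copied from the trace):

# bring up loopback inside the target namespace, then bridge the veth ends
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br

# allow NVMe/TCP traffic on port 4420 and let the bridge forward to itself
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

# verify reachability in both directions before starting the target
ping -c 1 10.0.0.2
ping -c 1 10.0.0.3
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1

# load the kernel initiator and launch the SPDK target inside the namespace
modprobe nvme-tcp
ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &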
09:58:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:04.638 09:58:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:17:04.638 09:58:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:04.638 09:58:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:17:04.897 Malloc1 00:17:04.897 09:58:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:04.897 09:58:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:17:04.897 09:58:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:04.897 09:58:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:17:04.897 09:58:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:04.897 09:58:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:04.897 09:58:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:04.897 09:58:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:17:04.897 09:58:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:04.897 09:58:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:04.897 09:58:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:04.897 09:58:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:17:04.897 [2024-07-15 09:58:18.305578] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:04.897 09:58:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:04.897 09:58:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:17:04.897 09:58:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:17:04.897 09:58:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:17:04.897 09:58:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:17:04.897 09:58:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:17:04.897 09:58:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:17:04.897 09:58:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:04.897 09:58:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- 
# set +x 00:17:04.897 09:58:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:04.897 09:58:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:17:04.897 { 00:17:04.897 "aliases": [ 00:17:04.897 "9a824178-0581-4cef-8b91-a70b25b83890" 00:17:04.897 ], 00:17:04.897 "assigned_rate_limits": { 00:17:04.897 "r_mbytes_per_sec": 0, 00:17:04.897 "rw_ios_per_sec": 0, 00:17:04.897 "rw_mbytes_per_sec": 0, 00:17:04.897 "w_mbytes_per_sec": 0 00:17:04.897 }, 00:17:04.897 "block_size": 512, 00:17:04.897 "claim_type": "exclusive_write", 00:17:04.897 "claimed": true, 00:17:04.897 "driver_specific": {}, 00:17:04.897 "memory_domains": [ 00:17:04.897 { 00:17:04.897 "dma_device_id": "system", 00:17:04.897 "dma_device_type": 1 00:17:04.897 }, 00:17:04.897 { 00:17:04.897 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:04.897 "dma_device_type": 2 00:17:04.897 } 00:17:04.897 ], 00:17:04.897 "name": "Malloc1", 00:17:04.897 "num_blocks": 1048576, 00:17:04.897 "product_name": "Malloc disk", 00:17:04.897 "supported_io_types": { 00:17:04.897 "abort": true, 00:17:04.897 "compare": false, 00:17:04.897 "compare_and_write": false, 00:17:04.897 "copy": true, 00:17:04.897 "flush": true, 00:17:04.897 "get_zone_info": false, 00:17:04.897 "nvme_admin": false, 00:17:04.897 "nvme_io": false, 00:17:04.897 "nvme_io_md": false, 00:17:04.897 "nvme_iov_md": false, 00:17:04.897 "read": true, 00:17:04.897 "reset": true, 00:17:04.897 "seek_data": false, 00:17:04.897 "seek_hole": false, 00:17:04.897 "unmap": true, 00:17:04.897 "write": true, 00:17:04.897 "write_zeroes": true, 00:17:04.897 "zcopy": true, 00:17:04.897 "zone_append": false, 00:17:04.897 "zone_management": false 00:17:04.897 }, 00:17:04.897 "uuid": "9a824178-0581-4cef-8b91-a70b25b83890", 00:17:04.897 "zoned": false 00:17:04.897 } 00:17:04.897 ]' 00:17:04.897 09:58:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:17:04.897 09:58:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:17:04.897 09:58:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:17:04.897 09:58:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:17:04.897 09:58:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:17:04.897 09:58:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:17:04.897 09:58:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:17:04.897 09:58:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec --hostid=a2b6b25a-cc90-4aea-9f09-c06f8a634aec -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:05.157 09:58:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:17:05.157 09:58:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:17:05.157 09:58:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 
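The RPC sequence just traced is the whole target-side setup for this test: a TCP transport with in-capsule data disabled (-c 0), a 512 MiB malloc bdev exported as a namespace of cnode1, and a listener on 10.0.0.2:4420, after which the host attaches with the kernel initiator. A condensed sketch, assuming rpc_cmd resolves to scripts/rpc.py against the default /var/tmp/spdk.sock (all values copied from the trace):

# target side (rpc_cmd in the harness; scripts/rpc.py shown here as an assumption)
scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0
scripts/rpc.py bdev_malloc_create 512 512 -b Malloc1           # 512 MiB, 512-byte blocks
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# the test derives malloc_size = block_size * num_blocks from bdev_get_bdevs + jq
scripts/rpc.py bdev_get_bdevs -b Malloc1 | jq '.[] .block_size'
scripts/rpc.py bdev_get_bdevs -b Malloc1 | jq '.[] .num_blocks'

# host side: connect the kernel NVMe/TCP initiator to the new subsystem
nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec \
             --hostid=a2b6b25a-cc90-4aea-9f09-c06f8a634aec \
             -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420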
nvme_devices=0 00:17:05.157 09:58:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:17:05.157 09:58:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:17:07.078 09:58:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:17:07.078 09:58:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:17:07.078 09:58:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:17:07.078 09:58:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:17:07.078 09:58:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:17:07.078 09:58:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:17:07.078 09:58:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:17:07.078 09:58:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:17:07.078 09:58:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:17:07.078 09:58:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:17:07.078 09:58:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:17:07.078 09:58:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:17:07.078 09:58:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:17:07.079 09:58:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:17:07.079 09:58:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:17:07.079 09:58:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:17:07.079 09:58:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:17:07.339 09:58:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:17:07.339 09:58:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:17:08.274 09:58:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:17:08.274 09:58:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:17:08.274 09:58:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:17:08.274 09:58:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:08.274 09:58:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:17:08.274 ************************************ 
00:17:08.274 START TEST filesystem_ext4 00:17:08.274 ************************************ 00:17:08.274 09:58:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create ext4 nvme0n1 00:17:08.274 09:58:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:17:08.275 09:58:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:17:08.275 09:58:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:17:08.275 09:58:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@924 -- # local fstype=ext4 00:17:08.275 09:58:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:17:08.275 09:58:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@926 -- # local i=0 00:17:08.275 09:58:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # local force 00:17:08.275 09:58:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@929 -- # '[' ext4 = ext4 ']' 00:17:08.275 09:58:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # force=-F 00:17:08.275 09:58:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:17:08.275 mke2fs 1.46.5 (30-Dec-2021) 00:17:08.532 Discarding device blocks: 0/522240 done 00:17:08.532 Creating filesystem with 522240 1k blocks and 130560 inodes 00:17:08.532 Filesystem UUID: 6d169fef-f6f6-45bd-804e-1e0fba4f7e91 00:17:08.532 Superblock backups stored on blocks: 00:17:08.532 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:17:08.532 00:17:08.532 Allocating group tables: 0/64 done 00:17:08.532 Writing inode tables: 0/64 done 00:17:08.532 Creating journal (8192 blocks): done 00:17:08.532 Writing superblocks and filesystem accounting information: 0/64 done 00:17:08.532 00:17:08.532 09:58:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@943 -- # return 0 00:17:08.532 09:58:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:17:08.532 09:58:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:17:08.532 09:58:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:17:08.790 09:58:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:17:08.790 09:58:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:17:08.790 09:58:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:17:08.790 09:58:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:17:08.790 09:58:22 
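The ext4 run above, and the btrfs and xfs runs that follow, all exercise the same nvmf_filesystem_create skeleton from target/filesystem.sh: put a filesystem on the GPT partition created earlier with parted, do a small create/delete cycle through the mount, unmount, and check that the target process and the block devices survived. Stripped of the xtrace noise it is roughly:

# one-time partitioning (traced earlier):
#   parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100%; partprobe
mkfs.ext4 -F /dev/nvme0n1p1            # -F for ext4; btrfs/xfs use -f
mount /dev/nvme0n1p1 /mnt/device
touch /mnt/device/aaa
sync
rm /mnt/device/aaa
sync
umount /mnt/device

kill -0 "$nvmfpid"                     # target process (pid 65463 here) must still be running
lsblk -l -o NAME | grep -q -w nvme0n1
lsblk -l -o NAME | grep -q -w nvme0n1p1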
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 65463 00:17:08.790 09:58:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:17:08.790 09:58:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:17:08.790 09:58:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:17:08.790 09:58:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:17:08.790 00:17:08.790 real 0m0.346s 00:17:08.790 user 0m0.029s 00:17:08.790 sys 0m0.069s 00:17:08.790 09:58:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:08.790 09:58:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:17:08.790 ************************************ 00:17:08.790 END TEST filesystem_ext4 00:17:08.790 ************************************ 00:17:08.790 09:58:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:17:08.790 09:58:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:17:08.790 09:58:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:17:08.790 09:58:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:08.790 09:58:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:17:08.790 ************************************ 00:17:08.790 START TEST filesystem_btrfs 00:17:08.790 ************************************ 00:17:08.790 09:58:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create btrfs nvme0n1 00:17:08.790 09:58:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:17:08.790 09:58:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:17:08.790 09:58:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:17:08.790 09:58:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@924 -- # local fstype=btrfs 00:17:08.790 09:58:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:17:08.790 09:58:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@926 -- # local i=0 00:17:08.790 09:58:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # local force 00:17:08.790 09:58:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@929 -- # '[' btrfs = ext4 ']' 00:17:08.790 09:58:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # force=-f 00:17:08.790 
09:58:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:17:08.790 btrfs-progs v6.6.2 00:17:08.790 See https://btrfs.readthedocs.io for more information. 00:17:08.790 00:17:08.790 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:17:08.790 NOTE: several default settings have changed in version 5.15, please make sure 00:17:08.790 this does not affect your deployments: 00:17:08.790 - DUP for metadata (-m dup) 00:17:08.790 - enabled no-holes (-O no-holes) 00:17:08.790 - enabled free-space-tree (-R free-space-tree) 00:17:08.790 00:17:08.790 Label: (null) 00:17:08.790 UUID: 0746363c-9866-44fa-b57e-03e447db2e5b 00:17:08.790 Node size: 16384 00:17:08.790 Sector size: 4096 00:17:08.790 Filesystem size: 510.00MiB 00:17:08.790 Block group profiles: 00:17:08.790 Data: single 8.00MiB 00:17:08.790 Metadata: DUP 32.00MiB 00:17:08.790 System: DUP 8.00MiB 00:17:08.790 SSD detected: yes 00:17:08.790 Zoned device: no 00:17:08.790 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:17:08.790 Runtime features: free-space-tree 00:17:08.790 Checksum: crc32c 00:17:08.790 Number of devices: 1 00:17:08.790 Devices: 00:17:08.790 ID SIZE PATH 00:17:08.790 1 510.00MiB /dev/nvme0n1p1 00:17:08.790 00:17:08.790 09:58:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@943 -- # return 0 00:17:08.790 09:58:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:17:09.049 09:58:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:17:09.049 09:58:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:17:09.049 09:58:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:17:09.049 09:58:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:17:09.049 09:58:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:17:09.049 09:58:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:17:09.049 09:58:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 65463 00:17:09.049 09:58:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:17:09.049 09:58:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:17:09.049 09:58:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:17:09.049 09:58:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:17:09.049 ************************************ 00:17:09.049 END TEST filesystem_btrfs 00:17:09.049 ************************************ 00:17:09.049 00:17:09.049 real 0m0.242s 00:17:09.049 user 0m0.017s 00:17:09.049 sys 0m0.087s 00:17:09.049 09:58:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:09.049 
09:58:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:17:09.049 09:58:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:17:09.049 09:58:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:17:09.049 09:58:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:17:09.049 09:58:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:09.049 09:58:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:17:09.049 ************************************ 00:17:09.049 START TEST filesystem_xfs 00:17:09.049 ************************************ 00:17:09.049 09:58:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create xfs nvme0n1 00:17:09.049 09:58:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:17:09.049 09:58:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:17:09.049 09:58:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:17:09.049 09:58:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@924 -- # local fstype=xfs 00:17:09.049 09:58:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:17:09.049 09:58:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@926 -- # local i=0 00:17:09.049 09:58:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@927 -- # local force 00:17:09.049 09:58:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@929 -- # '[' xfs = ext4 ']' 00:17:09.049 09:58:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # force=-f 00:17:09.049 09:58:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # mkfs.xfs -f /dev/nvme0n1p1 00:17:09.049 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:17:09.049 = sectsz=512 attr=2, projid32bit=1 00:17:09.049 = crc=1 finobt=1, sparse=1, rmapbt=0 00:17:09.049 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:17:09.049 data = bsize=4096 blocks=130560, imaxpct=25 00:17:09.049 = sunit=0 swidth=0 blks 00:17:09.049 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:17:09.049 log =internal log bsize=4096 blocks=16384, version=2 00:17:09.049 = sectsz=512 sunit=0 blks, lazy-count=1 00:17:09.049 realtime =none extsz=4096 blocks=0, rtextents=0 00:17:10.036 Discarding blocks...Done. 
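Across the three mkfs traces above, the only per-filesystem difference inside make_filesystem is the force flag: mkfs.ext4 takes -F while mkfs.btrfs and mkfs.xfs take -f. A condensed, hypothetical rendering of the helper as it appears in the trace (the real function in autotest_common.sh also keeps retry bookkeeping, elided here):

make_filesystem() {
    local fstype=$1
    local dev_name=$2
    local force

    # ext4's mkfs spells "force" differently from the btrfs/xfs tools
    if [ "$fstype" = ext4 ]; then
        force=-F
    else
        force=-f
    fi

    mkfs."$fstype" $force "$dev_name"
}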
00:17:10.036 09:58:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@943 -- # return 0 00:17:10.036 09:58:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:17:12.570 09:58:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:17:12.570 09:58:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:17:12.570 09:58:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:17:12.570 09:58:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:17:12.570 09:58:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:17:12.570 09:58:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:17:12.570 09:58:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 65463 00:17:12.570 09:58:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:17:12.570 09:58:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:17:12.570 09:58:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:17:12.570 09:58:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:17:12.570 ************************************ 00:17:12.570 END TEST filesystem_xfs 00:17:12.570 ************************************ 00:17:12.570 00:17:12.570 real 0m3.082s 00:17:12.570 user 0m0.027s 00:17:12.570 sys 0m0.071s 00:17:12.570 09:58:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:12.570 09:58:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:17:12.570 09:58:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:17:12.570 09:58:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:17:12.570 09:58:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:17:12.570 09:58:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:12.570 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:12.570 09:58:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:12.570 09:58:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:17:12.570 09:58:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:17:12.570 09:58:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:12.570 09:58:25 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:17:12.570 09:58:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:12.570 09:58:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:17:12.570 09:58:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:12.570 09:58:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:12.570 09:58:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:17:12.570 09:58:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:12.570 09:58:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:17:12.571 09:58:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 65463 00:17:12.571 09:58:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@948 -- # '[' -z 65463 ']' 00:17:12.571 09:58:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # kill -0 65463 00:17:12.571 09:58:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # uname 00:17:12.571 09:58:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:12.571 09:58:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 65463 00:17:12.571 killing process with pid 65463 00:17:12.571 09:58:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:12.571 09:58:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:12.571 09:58:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@966 -- # echo 'killing process with pid 65463' 00:17:12.571 09:58:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@967 -- # kill 65463 00:17:12.571 09:58:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # wait 65463 00:17:12.830 ************************************ 00:17:12.830 END TEST nvmf_filesystem_no_in_capsule 00:17:12.830 ************************************ 00:17:12.830 09:58:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:17:12.830 00:17:12.830 real 0m8.989s 00:17:12.830 user 0m34.376s 00:17:12.830 sys 0m1.277s 00:17:12.830 09:58:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:12.830 09:58:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:17:12.830 09:58:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1142 -- # return 0 00:17:12.830 09:58:26 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:17:12.830 09:58:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 
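Teardown for the no_in_capsule half, as traced above, mirrors the setup in reverse: drop the test partition, flush, disconnect the kernel initiator, delete the subsystem over RPC, and stop the target (again assuming scripts/rpc.py behind rpc_cmd):

flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1
sync
nvme disconnect -n nqn.2016-06.io.spdk:cnode1
scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
kill "$nvmfpid" && wait "$nvmfpid"     # killprocess 65463 in the trace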
00:17:12.830 09:58:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:12.830 09:58:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:17:12.830 ************************************ 00:17:12.830 START TEST nvmf_filesystem_in_capsule 00:17:12.830 ************************************ 00:17:12.830 09:58:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1123 -- # nvmf_filesystem_part 4096 00:17:12.830 09:58:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:17:12.830 09:58:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:17:12.830 09:58:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:12.830 09:58:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:12.830 09:58:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:17:12.830 09:58:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=65762 00:17:12.830 09:58:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 65762 00:17:12.830 09:58:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:12.830 09:58:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@829 -- # '[' -z 65762 ']' 00:17:12.830 09:58:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:12.830 09:58:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:12.830 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:12.830 09:58:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:12.830 09:58:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:12.830 09:58:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:17:12.830 [2024-07-15 09:58:26.286237] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:17:12.830 [2024-07-15 09:58:26.286316] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:12.830 [2024-07-15 09:58:26.411550] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:13.089 [2024-07-15 09:58:26.517587] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:13.089 [2024-07-15 09:58:26.517637] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:13.089 [2024-07-15 09:58:26.517644] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:13.089 [2024-07-15 09:58:26.517649] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:17:13.089 [2024-07-15 09:58:26.517654] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:13.089 [2024-07-15 09:58:26.517874] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:13.089 [2024-07-15 09:58:26.517996] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:13.089 [2024-07-15 09:58:26.518273] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:13.089 [2024-07-15 09:58:26.518279] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:17:13.655 09:58:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:13.655 09:58:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@862 -- # return 0 00:17:13.655 09:58:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:13.655 09:58:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:13.655 09:58:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:17:13.913 09:58:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:13.913 09:58:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:17:13.913 09:58:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:17:13.913 09:58:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:13.913 09:58:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:17:13.913 [2024-07-15 09:58:27.257394] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:13.913 09:58:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:13.913 09:58:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:17:13.913 09:58:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:13.913 09:58:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:17:13.913 Malloc1 00:17:13.913 09:58:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:13.913 09:58:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:17:13.913 09:58:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:13.913 09:58:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:17:13.913 09:58:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:13.913 09:58:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:13.913 09:58:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:13.913 09:58:27 
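From here the in_capsule variant repeats the first half of the suite; the only functional difference visible in the trace is the transport's in-capsule data size, which lets small writes (up to the 4096-byte limit) travel inside the NVMe/TCP command capsule rather than as a separate data transfer:

# no_in_capsule used -c 0; this pass uses -c 4096
scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 4096

Everything downstream (Malloc1, cnode1, the listener on 10.0.0.2:4420, and the ext4/btrfs/xfs cycles) is the same as in the first pass.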
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:17:13.913 09:58:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:13.913 09:58:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:13.913 09:58:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:13.913 09:58:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:17:13.913 [2024-07-15 09:58:27.423962] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:13.913 09:58:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:13.913 09:58:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:17:13.913 09:58:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:17:13.913 09:58:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:17:13.913 09:58:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:17:13.913 09:58:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:17:13.913 09:58:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:17:13.913 09:58:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:13.913 09:58:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:17:13.913 09:58:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:13.913 09:58:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:17:13.913 { 00:17:13.913 "aliases": [ 00:17:13.913 "6770514d-f035-486c-af5b-9e15ed4cffb5" 00:17:13.913 ], 00:17:13.914 "assigned_rate_limits": { 00:17:13.914 "r_mbytes_per_sec": 0, 00:17:13.914 "rw_ios_per_sec": 0, 00:17:13.914 "rw_mbytes_per_sec": 0, 00:17:13.914 "w_mbytes_per_sec": 0 00:17:13.914 }, 00:17:13.914 "block_size": 512, 00:17:13.914 "claim_type": "exclusive_write", 00:17:13.914 "claimed": true, 00:17:13.914 "driver_specific": {}, 00:17:13.914 "memory_domains": [ 00:17:13.914 { 00:17:13.914 "dma_device_id": "system", 00:17:13.914 "dma_device_type": 1 00:17:13.914 }, 00:17:13.914 { 00:17:13.914 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:13.914 "dma_device_type": 2 00:17:13.914 } 00:17:13.914 ], 00:17:13.914 "name": "Malloc1", 00:17:13.914 "num_blocks": 1048576, 00:17:13.914 "product_name": "Malloc disk", 00:17:13.914 "supported_io_types": { 00:17:13.914 "abort": true, 00:17:13.914 "compare": false, 00:17:13.914 "compare_and_write": false, 00:17:13.914 "copy": true, 00:17:13.914 "flush": true, 00:17:13.914 "get_zone_info": false, 00:17:13.914 "nvme_admin": false, 00:17:13.914 "nvme_io": false, 00:17:13.914 "nvme_io_md": false, 00:17:13.914 "nvme_iov_md": false, 00:17:13.914 "read": true, 00:17:13.914 "reset": true, 00:17:13.914 "seek_data": false, 00:17:13.914 "seek_hole": false, 00:17:13.914 "unmap": true, 
00:17:13.914 "write": true, 00:17:13.914 "write_zeroes": true, 00:17:13.914 "zcopy": true, 00:17:13.914 "zone_append": false, 00:17:13.914 "zone_management": false 00:17:13.914 }, 00:17:13.914 "uuid": "6770514d-f035-486c-af5b-9e15ed4cffb5", 00:17:13.914 "zoned": false 00:17:13.914 } 00:17:13.914 ]' 00:17:13.914 09:58:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:17:14.172 09:58:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:17:14.172 09:58:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:17:14.172 09:58:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:17:14.172 09:58:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:17:14.172 09:58:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:17:14.172 09:58:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:17:14.172 09:58:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec --hostid=a2b6b25a-cc90-4aea-9f09-c06f8a634aec -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:14.172 09:58:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:17:14.172 09:58:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:17:14.172 09:58:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:17:14.172 09:58:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:17:14.172 09:58:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:17:16.746 09:58:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:17:16.746 09:58:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:17:16.746 09:58:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:17:16.746 09:58:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:17:16.746 09:58:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:17:16.746 09:58:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:17:16.746 09:58:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:17:16.746 09:58:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:17:16.746 09:58:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:17:16.746 09:58:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:17:16.746 09:58:29 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:17:16.746 09:58:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:17:16.746 09:58:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:17:16.746 09:58:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:17:16.746 09:58:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:17:16.746 09:58:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:17:16.746 09:58:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:17:16.746 09:58:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:17:16.746 09:58:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:17:17.699 09:58:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:17:17.699 09:58:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:17:17.699 09:58:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:17:17.699 09:58:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:17.699 09:58:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:17:17.699 ************************************ 00:17:17.699 START TEST filesystem_in_capsule_ext4 00:17:17.699 ************************************ 00:17:17.699 09:58:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create ext4 nvme0n1 00:17:17.699 09:58:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:17:17.699 09:58:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:17:17.699 09:58:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:17:17.699 09:58:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@924 -- # local fstype=ext4 00:17:17.699 09:58:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:17:17.699 09:58:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@926 -- # local i=0 00:17:17.699 09:58:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@927 -- # local force 00:17:17.699 09:58:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@929 -- # '[' ext4 = ext4 ']' 00:17:17.699 09:58:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # force=-F 00:17:17.699 09:58:30 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:17:17.699 mke2fs 1.46.5 (30-Dec-2021) 00:17:17.699 Discarding device blocks: 0/522240 done 00:17:17.699 Creating filesystem with 522240 1k blocks and 130560 inodes 00:17:17.699 Filesystem UUID: e43c4758-44b1-474c-89b9-f10ea4c1973a 00:17:17.699 Superblock backups stored on blocks: 00:17:17.699 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:17:17.699 00:17:17.699 Allocating group tables: 0/64 done 00:17:17.699 Writing inode tables: 0/64 done 00:17:17.699 Creating journal (8192 blocks): done 00:17:17.699 Writing superblocks and filesystem accounting information: 0/64 done 00:17:17.699 00:17:17.699 09:58:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@943 -- # return 0 00:17:17.699 09:58:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:17:17.699 09:58:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:17:17.699 09:58:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:17:17.699 09:58:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:17:17.699 09:58:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:17:17.699 09:58:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:17:17.699 09:58:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:17:17.699 09:58:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 65762 00:17:17.699 09:58:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:17:17.699 09:58:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:17:17.959 09:58:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:17:17.959 09:58:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:17:17.959 ************************************ 00:17:17.959 END TEST filesystem_in_capsule_ext4 00:17:17.959 ************************************ 00:17:17.959 00:17:17.959 real 0m0.350s 00:17:17.959 user 0m0.023s 00:17:17.959 sys 0m0.065s 00:17:17.959 09:58:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:17.959 09:58:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:17:17.959 09:58:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:17:17.959 09:58:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:17:17.959 09:58:31 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:17:17.959 09:58:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:17.959 09:58:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:17:17.959 ************************************ 00:17:17.959 START TEST filesystem_in_capsule_btrfs 00:17:17.959 ************************************ 00:17:17.959 09:58:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create btrfs nvme0n1 00:17:17.959 09:58:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:17:17.959 09:58:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:17:17.959 09:58:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:17:17.959 09:58:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@924 -- # local fstype=btrfs 00:17:17.959 09:58:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:17:17.959 09:58:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@926 -- # local i=0 00:17:17.959 09:58:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # local force 00:17:17.959 09:58:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@929 -- # '[' btrfs = ext4 ']' 00:17:17.959 09:58:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # force=-f 00:17:17.959 09:58:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:17:17.959 btrfs-progs v6.6.2 00:17:17.959 See https://btrfs.readthedocs.io for more information. 00:17:17.959 00:17:17.959 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:17:17.959 NOTE: several default settings have changed in version 5.15, please make sure 00:17:17.959 this does not affect your deployments: 00:17:17.959 - DUP for metadata (-m dup) 00:17:17.959 - enabled no-holes (-O no-holes) 00:17:17.959 - enabled free-space-tree (-R free-space-tree) 00:17:17.959 00:17:17.959 Label: (null) 00:17:17.959 UUID: 523e5d83-1e54-40dc-a163-557a1d118479 00:17:17.959 Node size: 16384 00:17:17.959 Sector size: 4096 00:17:17.959 Filesystem size: 510.00MiB 00:17:17.959 Block group profiles: 00:17:17.959 Data: single 8.00MiB 00:17:17.959 Metadata: DUP 32.00MiB 00:17:17.959 System: DUP 8.00MiB 00:17:17.959 SSD detected: yes 00:17:17.959 Zoned device: no 00:17:17.959 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:17:17.959 Runtime features: free-space-tree 00:17:17.959 Checksum: crc32c 00:17:17.959 Number of devices: 1 00:17:17.959 Devices: 00:17:17.959 ID SIZE PATH 00:17:17.959 1 510.00MiB /dev/nvme0n1p1 00:17:17.959 00:17:17.959 09:58:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@943 -- # return 0 00:17:17.959 09:58:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:17:17.959 09:58:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:17:17.959 09:58:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:17:17.959 09:58:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:17:17.959 09:58:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:17:18.219 09:58:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:17:18.219 09:58:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:17:18.219 09:58:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 65762 00:17:18.219 09:58:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:17:18.219 09:58:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:17:18.219 09:58:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:17:18.219 09:58:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:17:18.219 00:17:18.219 real 0m0.224s 00:17:18.219 user 0m0.025s 00:17:18.219 sys 0m0.085s 00:17:18.219 09:58:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:18.219 09:58:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:17:18.219 ************************************ 00:17:18.219 END TEST filesystem_in_capsule_btrfs 00:17:18.219 ************************************ 00:17:18.219 09:58:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
common/autotest_common.sh@1142 -- # return 0 00:17:18.219 09:58:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:17:18.219 09:58:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:17:18.219 09:58:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:18.219 09:58:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:17:18.219 ************************************ 00:17:18.219 START TEST filesystem_in_capsule_xfs 00:17:18.219 ************************************ 00:17:18.219 09:58:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create xfs nvme0n1 00:17:18.219 09:58:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:17:18.219 09:58:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:17:18.219 09:58:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:17:18.219 09:58:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@924 -- # local fstype=xfs 00:17:18.219 09:58:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:17:18.219 09:58:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@926 -- # local i=0 00:17:18.219 09:58:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # local force 00:17:18.219 09:58:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@929 -- # '[' xfs = ext4 ']' 00:17:18.219 09:58:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # force=-f 00:17:18.219 09:58:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # mkfs.xfs -f /dev/nvme0n1p1 00:17:18.219 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:17:18.219 = sectsz=512 attr=2, projid32bit=1 00:17:18.219 = crc=1 finobt=1, sparse=1, rmapbt=0 00:17:18.219 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:17:18.219 data = bsize=4096 blocks=130560, imaxpct=25 00:17:18.219 = sunit=0 swidth=0 blks 00:17:18.219 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:17:18.219 log =internal log bsize=4096 blocks=16384, version=2 00:17:18.219 = sectsz=512 sunit=0 blks, lazy-count=1 00:17:18.219 realtime =none extsz=4096 blocks=0, rtextents=0 00:17:19.156 Discarding blocks...Done. 
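The xfs pass below repeats the same cycle the ext4 and btrfs passes above went through. Condensed from the commands traced in this log (a sketch only; the device path, mount point and target pid are this run's values), the per-filesystem body is roughly:

  # make_filesystem: ext4 takes -F, btrfs/xfs take -f
  fstype=xfs
  dev=/dev/nvme0n1p1                # partition carved from the NVMe-oF namespace
  force=-f; [ "$fstype" = ext4 ] && force=-F
  mkfs.$fstype $force "$dev"

  # exercise the filesystem over the fabric, then unmount
  mount "$dev" /mnt/device
  touch /mnt/device/aaa
  sync
  rm /mnt/device/aaa
  sync
  umount /mnt/device

  # the target must survive the I/O and the block devices must still be visible
  kill -0 65762                              # nvmf_tgt pid for this test group
  lsblk -l -o NAME | grep -q -w nvme0n1
  lsblk -l -o NAME | grep -q -w nvme0n1p1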
00:17:19.156 09:58:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@943 -- # return 0 00:17:19.156 09:58:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:17:21.059 09:58:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:17:21.059 09:58:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:17:21.059 09:58:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:17:21.059 09:58:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:17:21.059 09:58:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:17:21.059 09:58:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:17:21.059 09:58:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 65762 00:17:21.059 09:58:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:17:21.059 09:58:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:17:21.059 09:58:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:17:21.059 09:58:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:17:21.059 00:17:21.059 real 0m2.609s 00:17:21.059 user 0m0.024s 00:17:21.059 sys 0m0.080s 00:17:21.059 09:58:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:21.059 09:58:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:17:21.059 ************************************ 00:17:21.059 END TEST filesystem_in_capsule_xfs 00:17:21.059 ************************************ 00:17:21.059 09:58:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:17:21.059 09:58:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:17:21.059 09:58:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:17:21.059 09:58:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:21.059 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:21.059 09:58:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:21.059 09:58:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:17:21.059 09:58:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:21.059 09:58:34 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:17:21.059 09:58:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:21.059 09:58:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:17:21.059 09:58:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:17:21.059 09:58:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:21.059 09:58:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:21.059 09:58:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:17:21.059 09:58:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:21.059 09:58:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:17:21.059 09:58:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 65762 00:17:21.059 09:58:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@948 -- # '[' -z 65762 ']' 00:17:21.059 09:58:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@952 -- # kill -0 65762 00:17:21.059 09:58:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # uname 00:17:21.059 09:58:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:21.059 09:58:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 65762 00:17:21.059 09:58:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:21.059 09:58:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:21.059 killing process with pid 65762 00:17:21.059 09:58:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@966 -- # echo 'killing process with pid 65762' 00:17:21.059 09:58:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@967 -- # kill 65762 00:17:21.059 09:58:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # wait 65762 00:17:21.318 09:58:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:17:21.318 00:17:21.318 real 0m8.599s 00:17:21.318 user 0m33.051s 00:17:21.318 sys 0m1.245s 00:17:21.318 09:58:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:21.318 09:58:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:17:21.318 ************************************ 00:17:21.319 END TEST nvmf_filesystem_in_capsule 00:17:21.319 ************************************ 00:17:21.319 09:58:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1142 -- # return 0 00:17:21.319 09:58:34 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:17:21.319 09:58:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@488 -- # nvmfcleanup 
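The teardown traced above detaches the initiator before the target goes away. Roughly, and using this run's NQN, serial and pid (a sketch of the traced commands, not the framework's exact helpers; rpc_cmd is the test framework's wrapper for scripts/rpc.py):

  nvmfpid=65762                                    # nvmf_tgt started for the in-capsule tests
  flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1   # drop the test partition under a lock
  sync
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1    # detach the initiator side
  while lsblk -l -o NAME,SERIAL | grep -q -w SPDKISFASTANDAWESOME; do
    sleep 1                                        # wait until the namespace really disappears
  done
  rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1   # remove the subsystem on the target
  kill "$nvmfpid" && wait "$nvmfpid"               # stop nvmf_tgt; nvmftestfini then unloads nvme-tcp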
00:17:21.319 09:58:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@117 -- # sync 00:17:21.577 09:58:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:21.577 09:58:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@120 -- # set +e 00:17:21.577 09:58:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:21.577 09:58:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:21.577 rmmod nvme_tcp 00:17:21.577 rmmod nvme_fabrics 00:17:21.577 rmmod nvme_keyring 00:17:21.577 09:58:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:21.577 09:58:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@124 -- # set -e 00:17:21.577 09:58:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@125 -- # return 0 00:17:21.577 09:58:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:17:21.577 09:58:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:21.577 09:58:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:21.577 09:58:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:21.577 09:58:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:21.577 09:58:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:21.577 09:58:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:21.577 09:58:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:21.577 09:58:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:21.577 09:58:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:17:21.577 ************************************ 00:17:21.578 END TEST nvmf_filesystem 00:17:21.578 ************************************ 00:17:21.578 00:17:21.578 real 0m18.553s 00:17:21.578 user 1m7.738s 00:17:21.578 sys 0m2.984s 00:17:21.578 09:58:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:21.578 09:58:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:17:21.578 09:58:35 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:17:21.578 09:58:35 nvmf_tcp -- nvmf/nvmf.sh@25 -- # run_test nvmf_target_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:17:21.578 09:58:35 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:21.578 09:58:35 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:21.578 09:58:35 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:21.578 ************************************ 00:17:21.578 START TEST nvmf_target_discovery 00:17:21.578 ************************************ 00:17:21.578 09:58:35 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:17:21.837 * Looking for test storage... 
00:17:21.837 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:17:21.837 09:58:35 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:21.837 09:58:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:17:21.837 09:58:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:21.837 09:58:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:21.837 09:58:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:21.837 09:58:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:21.837 09:58:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:21.837 09:58:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:21.837 09:58:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:21.837 09:58:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:21.837 09:58:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:21.837 09:58:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:21.837 09:58:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec 00:17:21.837 09:58:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=a2b6b25a-cc90-4aea-9f09-c06f8a634aec 00:17:21.837 09:58:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:21.838 09:58:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:21.838 09:58:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:21.838 09:58:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:21.838 09:58:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:21.838 09:58:35 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:21.838 09:58:35 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:21.838 09:58:35 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:21.838 09:58:35 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:21.838 09:58:35 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:21.838 09:58:35 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:21.838 09:58:35 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:17:21.838 09:58:35 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:21.838 09:58:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@47 -- # : 0 00:17:21.838 09:58:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:21.838 09:58:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:21.838 09:58:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:21.838 09:58:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:21.838 09:58:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:21.838 09:58:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:21.838 09:58:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:21.838 09:58:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:21.838 09:58:35 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:17:21.838 09:58:35 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:17:21.838 09:58:35 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:17:21.838 09:58:35 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:17:21.838 09:58:35 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:17:21.838 09:58:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:21.838 09:58:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:21.838 09:58:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@448 -- # 
prepare_net_devs 00:17:21.838 09:58:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:21.838 09:58:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:21.838 09:58:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:21.838 09:58:35 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:21.838 09:58:35 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:21.838 09:58:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:17:21.838 09:58:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:17:21.838 09:58:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:17:21.838 09:58:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:17:21.838 09:58:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:17:21.838 09:58:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@432 -- # nvmf_veth_init 00:17:21.838 09:58:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:21.838 09:58:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:21.838 09:58:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:17:21.838 09:58:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:17:21.838 09:58:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:21.838 09:58:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:21.838 09:58:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:21.838 09:58:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:21.838 09:58:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:21.838 09:58:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:21.838 09:58:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:21.838 09:58:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:21.838 09:58:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:17:21.838 09:58:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:17:21.838 Cannot find device "nvmf_tgt_br" 00:17:21.838 09:58:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@155 -- # true 00:17:21.838 09:58:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:17:21.838 Cannot find device "nvmf_tgt_br2" 00:17:21.838 09:58:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@156 -- # true 00:17:21.838 09:58:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:17:21.838 09:58:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:17:21.838 Cannot find device "nvmf_tgt_br" 00:17:21.838 09:58:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@158 -- # true 00:17:21.838 09:58:35 nvmf_tcp.nvmf_target_discovery -- 
nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:17:21.838 Cannot find device "nvmf_tgt_br2" 00:17:21.838 09:58:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@159 -- # true 00:17:21.838 09:58:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:17:21.838 09:58:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:17:21.838 09:58:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:21.838 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:21.838 09:58:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@162 -- # true 00:17:21.838 09:58:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:21.838 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:21.838 09:58:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@163 -- # true 00:17:21.838 09:58:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:17:21.838 09:58:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:21.838 09:58:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:21.838 09:58:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:22.097 09:58:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:22.097 09:58:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:22.097 09:58:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:22.097 09:58:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:22.097 09:58:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:17:22.097 09:58:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:17:22.097 09:58:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:17:22.097 09:58:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:17:22.097 09:58:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:17:22.097 09:58:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:22.097 09:58:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:22.097 09:58:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:22.097 09:58:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:17:22.097 09:58:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:17:22.097 09:58:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:17:22.097 09:58:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:22.097 09:58:35 
nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:22.097 09:58:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:22.097 09:58:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:22.098 09:58:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:17:22.098 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:22.098 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.092 ms 00:17:22.098 00:17:22.098 --- 10.0.0.2 ping statistics --- 00:17:22.098 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:22.098 rtt min/avg/max/mdev = 0.092/0.092/0.092/0.000 ms 00:17:22.098 09:58:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:17:22.098 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:22.098 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.069 ms 00:17:22.098 00:17:22.098 --- 10.0.0.3 ping statistics --- 00:17:22.098 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:22.098 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:17:22.098 09:58:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:22.098 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:22.098 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:17:22.098 00:17:22.098 --- 10.0.0.1 ping statistics --- 00:17:22.098 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:22.098 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:17:22.098 09:58:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:22.098 09:58:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@433 -- # return 0 00:17:22.098 09:58:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:22.098 09:58:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:22.098 09:58:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:22.098 09:58:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:22.098 09:58:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:22.098 09:58:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:22.098 09:58:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:22.098 09:58:35 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:17:22.098 09:58:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:22.098 09:58:35 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:22.098 09:58:35 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:22.098 09:58:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@481 -- # nvmfpid=66223 00:17:22.098 09:58:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:22.098 09:58:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@482 -- # waitforlisten 66223 00:17:22.098 09:58:35 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@829 -- # '[' -z 66223 ']' 00:17:22.098 09:58:35 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:22.098 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:22.098 09:58:35 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:22.098 09:58:35 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:22.098 09:58:35 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:22.098 09:58:35 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:22.098 [2024-07-15 09:58:35.639729] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:17:22.098 [2024-07-15 09:58:35.639829] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:22.357 [2024-07-15 09:58:35.779415] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:22.357 [2024-07-15 09:58:35.884587] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:22.357 [2024-07-15 09:58:35.884635] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:22.357 [2024-07-15 09:58:35.884643] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:22.357 [2024-07-15 09:58:35.884648] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:22.357 [2024-07-15 09:58:35.884652] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
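nvmf_veth_init above gives the target its own network namespace, reachable from the host over a bridged veth pair, and nvmfappstart runs nvmf_tgt inside it; the RPC calls traced below then build the discovery configuration. Condensed to this run's names, addresses and ports (a sketch of the traced commands, not the framework's exact code):

  # network plumbing: host-side initiator at 10.0.0.1, namespaced target at 10.0.0.2
  # (a second target interface, nvmf_tgt_if2 at 10.0.0.3, is created the same way)
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up && ip link set nvmf_init_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br          # bridge the two host-side veth ends
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT

  # start the target inside the namespace and wait for its RPC socket
  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!                                       # 66223 in this run
  waitforlisten "$nvmfpid"                         # framework helper: polls /var/tmp/spdk.sock

  # provision four null-bdev subsystems plus a discovery referral
  rpc_cmd nvmf_create_transport -t tcp -o -u 8192
  for i in 1 2 3 4; do
    rpc_cmd bdev_null_create "Null$i" 102400 512
    rpc_cmd nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK0000000000000$i"
    rpc_cmd nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Null$i"
    rpc_cmd nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" -t tcp -a 10.0.0.2 -s 4420
  done
  rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430

With that in place, the nvme discover output further down reports six discovery log entries: the current discovery subsystem, cnode1 through cnode4, and the 4430 referral.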
00:17:22.357 [2024-07-15 09:58:35.884838] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:22.357 [2024-07-15 09:58:35.884987] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:22.357 [2024-07-15 09:58:35.885741] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:22.357 [2024-07-15 09:58:35.885743] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:17:23.293 09:58:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:23.293 09:58:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@862 -- # return 0 00:17:23.293 09:58:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:23.293 09:58:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:23.293 09:58:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:23.293 09:58:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:23.293 09:58:36 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:23.293 09:58:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:23.293 09:58:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:23.293 [2024-07-15 09:58:36.594400] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:23.293 09:58:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:23.293 09:58:36 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:17:23.293 09:58:36 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:17:23.293 09:58:36 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:17:23.293 09:58:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:23.293 09:58:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:23.293 Null1 00:17:23.293 09:58:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:23.293 09:58:36 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:23.293 09:58:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:23.293 09:58:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:23.293 09:58:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:23.293 09:58:36 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:17:23.293 09:58:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:23.293 09:58:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:23.293 09:58:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:23.293 09:58:36 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:23.293 09:58:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:23.293 09:58:36 nvmf_tcp.nvmf_target_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:17:23.293 [2024-07-15 09:58:36.660726] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:23.293 09:58:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:23.293 09:58:36 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:17:23.293 09:58:36 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:17:23.294 09:58:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:23.294 09:58:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:23.294 Null2 00:17:23.294 09:58:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:23.294 09:58:36 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:17:23.294 09:58:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:23.294 09:58:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:23.294 09:58:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:23.294 09:58:36 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:17:23.294 09:58:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:23.294 09:58:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:23.294 09:58:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:23.294 09:58:36 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:17:23.294 09:58:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:23.294 09:58:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:23.294 09:58:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:23.294 09:58:36 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:17:23.294 09:58:36 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:17:23.294 09:58:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:23.294 09:58:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:23.294 Null3 00:17:23.294 09:58:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:23.294 09:58:36 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:17:23.294 09:58:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:23.294 09:58:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:23.294 09:58:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:23.294 09:58:36 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:17:23.294 09:58:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:23.294 09:58:36 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:23.294 09:58:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:23.294 09:58:36 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:17:23.294 09:58:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:23.294 09:58:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:23.294 09:58:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:23.294 09:58:36 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:17:23.294 09:58:36 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:17:23.294 09:58:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:23.294 09:58:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:23.294 Null4 00:17:23.294 09:58:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:23.294 09:58:36 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:17:23.294 09:58:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:23.294 09:58:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:23.294 09:58:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:23.294 09:58:36 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:17:23.294 09:58:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:23.294 09:58:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:23.294 09:58:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:23.294 09:58:36 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:17:23.294 09:58:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:23.294 09:58:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:23.294 09:58:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:23.294 09:58:36 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:23.294 09:58:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:23.294 09:58:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:23.294 09:58:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:23.294 09:58:36 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:17:23.294 09:58:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:23.294 09:58:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:23.294 09:58:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:23.294 
09:58:36 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec --hostid=a2b6b25a-cc90-4aea-9f09-c06f8a634aec -t tcp -a 10.0.0.2 -s 4420 00:17:23.553 00:17:23.553 Discovery Log Number of Records 6, Generation counter 6 00:17:23.553 =====Discovery Log Entry 0====== 00:17:23.553 trtype: tcp 00:17:23.553 adrfam: ipv4 00:17:23.553 subtype: current discovery subsystem 00:17:23.553 treq: not required 00:17:23.553 portid: 0 00:17:23.553 trsvcid: 4420 00:17:23.553 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:17:23.553 traddr: 10.0.0.2 00:17:23.553 eflags: explicit discovery connections, duplicate discovery information 00:17:23.553 sectype: none 00:17:23.553 =====Discovery Log Entry 1====== 00:17:23.553 trtype: tcp 00:17:23.553 adrfam: ipv4 00:17:23.553 subtype: nvme subsystem 00:17:23.553 treq: not required 00:17:23.553 portid: 0 00:17:23.553 trsvcid: 4420 00:17:23.553 subnqn: nqn.2016-06.io.spdk:cnode1 00:17:23.553 traddr: 10.0.0.2 00:17:23.553 eflags: none 00:17:23.553 sectype: none 00:17:23.553 =====Discovery Log Entry 2====== 00:17:23.553 trtype: tcp 00:17:23.553 adrfam: ipv4 00:17:23.553 subtype: nvme subsystem 00:17:23.553 treq: not required 00:17:23.553 portid: 0 00:17:23.553 trsvcid: 4420 00:17:23.553 subnqn: nqn.2016-06.io.spdk:cnode2 00:17:23.553 traddr: 10.0.0.2 00:17:23.553 eflags: none 00:17:23.553 sectype: none 00:17:23.553 =====Discovery Log Entry 3====== 00:17:23.553 trtype: tcp 00:17:23.553 adrfam: ipv4 00:17:23.553 subtype: nvme subsystem 00:17:23.553 treq: not required 00:17:23.553 portid: 0 00:17:23.553 trsvcid: 4420 00:17:23.553 subnqn: nqn.2016-06.io.spdk:cnode3 00:17:23.553 traddr: 10.0.0.2 00:17:23.553 eflags: none 00:17:23.553 sectype: none 00:17:23.553 =====Discovery Log Entry 4====== 00:17:23.553 trtype: tcp 00:17:23.553 adrfam: ipv4 00:17:23.553 subtype: nvme subsystem 00:17:23.553 treq: not required 00:17:23.553 portid: 0 00:17:23.553 trsvcid: 4420 00:17:23.553 subnqn: nqn.2016-06.io.spdk:cnode4 00:17:23.553 traddr: 10.0.0.2 00:17:23.553 eflags: none 00:17:23.553 sectype: none 00:17:23.553 =====Discovery Log Entry 5====== 00:17:23.553 trtype: tcp 00:17:23.553 adrfam: ipv4 00:17:23.553 subtype: discovery subsystem referral 00:17:23.553 treq: not required 00:17:23.553 portid: 0 00:17:23.553 trsvcid: 4430 00:17:23.553 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:17:23.553 traddr: 10.0.0.2 00:17:23.553 eflags: none 00:17:23.553 sectype: none 00:17:23.553 Perform nvmf subsystem discovery via RPC 00:17:23.553 09:58:36 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:17:23.553 09:58:36 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:17:23.553 09:58:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:23.553 09:58:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:23.553 [ 00:17:23.553 { 00:17:23.553 "allow_any_host": true, 00:17:23.553 "hosts": [], 00:17:23.553 "listen_addresses": [ 00:17:23.553 { 00:17:23.553 "adrfam": "IPv4", 00:17:23.553 "traddr": "10.0.0.2", 00:17:23.553 "trsvcid": "4420", 00:17:23.553 "trtype": "TCP" 00:17:23.553 } 00:17:23.553 ], 00:17:23.553 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:17:23.553 "subtype": "Discovery" 00:17:23.553 }, 00:17:23.553 { 00:17:23.553 "allow_any_host": true, 00:17:23.553 "hosts": [], 00:17:23.553 "listen_addresses": [ 00:17:23.553 { 
00:17:23.553 "adrfam": "IPv4", 00:17:23.553 "traddr": "10.0.0.2", 00:17:23.553 "trsvcid": "4420", 00:17:23.553 "trtype": "TCP" 00:17:23.553 } 00:17:23.553 ], 00:17:23.553 "max_cntlid": 65519, 00:17:23.553 "max_namespaces": 32, 00:17:23.553 "min_cntlid": 1, 00:17:23.553 "model_number": "SPDK bdev Controller", 00:17:23.553 "namespaces": [ 00:17:23.553 { 00:17:23.553 "bdev_name": "Null1", 00:17:23.553 "name": "Null1", 00:17:23.553 "nguid": "518B83E38E894EDEBFE99FF6CB68A45C", 00:17:23.553 "nsid": 1, 00:17:23.553 "uuid": "518b83e3-8e89-4ede-bfe9-9ff6cb68a45c" 00:17:23.553 } 00:17:23.553 ], 00:17:23.553 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:23.553 "serial_number": "SPDK00000000000001", 00:17:23.553 "subtype": "NVMe" 00:17:23.553 }, 00:17:23.553 { 00:17:23.553 "allow_any_host": true, 00:17:23.553 "hosts": [], 00:17:23.553 "listen_addresses": [ 00:17:23.553 { 00:17:23.553 "adrfam": "IPv4", 00:17:23.553 "traddr": "10.0.0.2", 00:17:23.553 "trsvcid": "4420", 00:17:23.553 "trtype": "TCP" 00:17:23.553 } 00:17:23.553 ], 00:17:23.553 "max_cntlid": 65519, 00:17:23.553 "max_namespaces": 32, 00:17:23.553 "min_cntlid": 1, 00:17:23.553 "model_number": "SPDK bdev Controller", 00:17:23.553 "namespaces": [ 00:17:23.553 { 00:17:23.553 "bdev_name": "Null2", 00:17:23.553 "name": "Null2", 00:17:23.553 "nguid": "9B132C677C164E939DDD1F35C3AFF6D4", 00:17:23.553 "nsid": 1, 00:17:23.553 "uuid": "9b132c67-7c16-4e93-9ddd-1f35c3aff6d4" 00:17:23.553 } 00:17:23.553 ], 00:17:23.553 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:17:23.553 "serial_number": "SPDK00000000000002", 00:17:23.553 "subtype": "NVMe" 00:17:23.553 }, 00:17:23.553 { 00:17:23.553 "allow_any_host": true, 00:17:23.553 "hosts": [], 00:17:23.553 "listen_addresses": [ 00:17:23.553 { 00:17:23.553 "adrfam": "IPv4", 00:17:23.553 "traddr": "10.0.0.2", 00:17:23.553 "trsvcid": "4420", 00:17:23.553 "trtype": "TCP" 00:17:23.553 } 00:17:23.553 ], 00:17:23.553 "max_cntlid": 65519, 00:17:23.553 "max_namespaces": 32, 00:17:23.553 "min_cntlid": 1, 00:17:23.553 "model_number": "SPDK bdev Controller", 00:17:23.553 "namespaces": [ 00:17:23.553 { 00:17:23.553 "bdev_name": "Null3", 00:17:23.553 "name": "Null3", 00:17:23.553 "nguid": "C832D40DE3BA45C6AD89B8D2CAEB9593", 00:17:23.553 "nsid": 1, 00:17:23.553 "uuid": "c832d40d-e3ba-45c6-ad89-b8d2caeb9593" 00:17:23.553 } 00:17:23.553 ], 00:17:23.553 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:17:23.553 "serial_number": "SPDK00000000000003", 00:17:23.553 "subtype": "NVMe" 00:17:23.553 }, 00:17:23.553 { 00:17:23.553 "allow_any_host": true, 00:17:23.553 "hosts": [], 00:17:23.553 "listen_addresses": [ 00:17:23.553 { 00:17:23.553 "adrfam": "IPv4", 00:17:23.553 "traddr": "10.0.0.2", 00:17:23.553 "trsvcid": "4420", 00:17:23.553 "trtype": "TCP" 00:17:23.553 } 00:17:23.553 ], 00:17:23.553 "max_cntlid": 65519, 00:17:23.553 "max_namespaces": 32, 00:17:23.554 "min_cntlid": 1, 00:17:23.554 "model_number": "SPDK bdev Controller", 00:17:23.554 "namespaces": [ 00:17:23.554 { 00:17:23.554 "bdev_name": "Null4", 00:17:23.554 "name": "Null4", 00:17:23.554 "nguid": "2430D2E9845A4CD9A6730A03C3295154", 00:17:23.554 "nsid": 1, 00:17:23.554 "uuid": "2430d2e9-845a-4cd9-a673-0a03c3295154" 00:17:23.554 } 00:17:23.554 ], 00:17:23.554 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:17:23.554 "serial_number": "SPDK00000000000004", 00:17:23.554 "subtype": "NVMe" 00:17:23.554 } 00:17:23.554 ] 00:17:23.554 09:58:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:23.554 09:58:36 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 
1 4 00:17:23.554 09:58:36 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:17:23.554 09:58:36 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:23.554 09:58:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:23.554 09:58:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:23.554 09:58:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:23.554 09:58:36 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:17:23.554 09:58:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:23.554 09:58:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:23.554 09:58:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:23.554 09:58:36 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:17:23.554 09:58:36 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:17:23.554 09:58:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:23.554 09:58:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:23.554 09:58:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:23.554 09:58:36 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:17:23.554 09:58:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:23.554 09:58:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:23.554 09:58:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:23.554 09:58:36 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:17:23.554 09:58:36 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:17:23.554 09:58:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:23.554 09:58:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:23.554 09:58:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:23.554 09:58:37 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:17:23.554 09:58:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:23.554 09:58:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:23.554 09:58:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:23.554 09:58:37 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:17:23.554 09:58:37 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:17:23.554 09:58:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:23.554 09:58:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:23.554 09:58:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:23.554 09:58:37 nvmf_tcp.nvmf_target_discovery -- 
target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:17:23.554 09:58:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:23.554 09:58:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:23.554 09:58:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:23.554 09:58:37 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:17:23.554 09:58:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:23.554 09:58:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:23.554 09:58:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:23.554 09:58:37 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:17:23.554 09:58:37 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:17:23.554 09:58:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:23.554 09:58:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:23.554 09:58:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:23.554 09:58:37 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:17:23.554 09:58:37 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:17:23.554 09:58:37 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:17:23.554 09:58:37 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:17:23.554 09:58:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:23.554 09:58:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@117 -- # sync 00:17:23.554 09:58:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:23.554 09:58:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@120 -- # set +e 00:17:23.554 09:58:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:23.554 09:58:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:23.814 rmmod nvme_tcp 00:17:23.814 rmmod nvme_fabrics 00:17:23.814 rmmod nvme_keyring 00:17:23.814 09:58:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:23.814 09:58:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@124 -- # set -e 00:17:23.814 09:58:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@125 -- # return 0 00:17:23.814 09:58:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@489 -- # '[' -n 66223 ']' 00:17:23.814 09:58:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@490 -- # killprocess 66223 00:17:23.814 09:58:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@948 -- # '[' -z 66223 ']' 00:17:23.814 09:58:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@952 -- # kill -0 66223 00:17:23.814 09:58:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@953 -- # uname 00:17:23.814 09:58:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:23.814 09:58:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 66223 00:17:23.814 killing process with pid 66223 00:17:23.814 09:58:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@954 -- 
# process_name=reactor_0 00:17:23.814 09:58:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:23.814 09:58:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 66223' 00:17:23.814 09:58:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@967 -- # kill 66223 00:17:23.814 09:58:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@972 -- # wait 66223 00:17:24.073 09:58:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:24.073 09:58:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:24.073 09:58:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:24.073 09:58:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:24.073 09:58:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:24.073 09:58:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:24.073 09:58:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:24.073 09:58:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:24.073 09:58:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:17:24.073 00:17:24.073 real 0m2.403s 00:17:24.073 user 0m6.532s 00:17:24.073 sys 0m0.615s 00:17:24.073 09:58:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:24.073 09:58:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:24.073 ************************************ 00:17:24.073 END TEST nvmf_target_discovery 00:17:24.073 ************************************ 00:17:24.073 09:58:37 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:17:24.073 09:58:37 nvmf_tcp -- nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /home/vagrant/spdk_repo/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:17:24.073 09:58:37 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:24.073 09:58:37 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:24.073 09:58:37 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:24.073 ************************************ 00:17:24.073 START TEST nvmf_referrals 00:17:24.073 ************************************ 00:17:24.073 09:58:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:17:24.333 * Looking for test storage... 
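(Editorial aside: the trace above closes out target/discovery.sh by walking a seq 1 4 loop that deletes the four test subsystems and their backing null bdevs, removes the 4430 referral, and checks that bdev_get_bdevs comes back empty before nvmftestfini unloads the nvme modules. A minimal standalone sketch of that teardown is shown below; it calls scripts/rpc.py directly rather than the rpc_cmd wrapper used in the trace, and it assumes rpc.py is on PATH and the target answers on the default /var/tmp/spdk.sock, neither of which is stated in the log.)
# teardown sketch for the discovery-test fixtures (assumptions: rpc.py on PATH, default RPC socket)
for i in $(seq 1 4); do
    rpc.py nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode${i}"    # drop subsystem cnode<i>
    rpc.py bdev_null_delete "Null${i}"                              # drop its null bdev
done
rpc.py nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430    # same referral removed in the trace
rpc.py bdev_get_bdevs | jq -r '.[].name'                            # expected to print nothing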
00:17:24.333 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:17:24.333 09:58:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:24.333 09:58:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:17:24.333 09:58:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:24.333 09:58:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:24.333 09:58:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:24.333 09:58:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:24.333 09:58:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:24.333 09:58:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:24.333 09:58:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:24.333 09:58:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:24.333 09:58:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:24.333 09:58:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:24.333 09:58:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec 00:17:24.333 09:58:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=a2b6b25a-cc90-4aea-9f09-c06f8a634aec 00:17:24.333 09:58:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:24.333 09:58:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:24.333 09:58:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:24.333 09:58:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:24.333 09:58:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:24.333 09:58:37 nvmf_tcp.nvmf_referrals -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:24.333 09:58:37 nvmf_tcp.nvmf_referrals -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:24.333 09:58:37 nvmf_tcp.nvmf_referrals -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:24.333 09:58:37 nvmf_tcp.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:24.333 09:58:37 nvmf_tcp.nvmf_referrals -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:24.333 09:58:37 nvmf_tcp.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:24.333 09:58:37 nvmf_tcp.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:17:24.334 09:58:37 nvmf_tcp.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:24.334 09:58:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@47 -- # : 0 00:17:24.334 09:58:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:24.334 09:58:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:24.334 09:58:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:24.334 09:58:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:24.334 09:58:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:24.334 09:58:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:24.334 09:58:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:24.334 09:58:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:24.334 09:58:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:17:24.334 09:58:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:17:24.334 09:58:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:17:24.334 09:58:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:17:24.334 09:58:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:17:24.334 09:58:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:17:24.334 09:58:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:17:24.334 09:58:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:24.334 09:58:37 nvmf_tcp.nvmf_referrals -- 
nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:24.334 09:58:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:24.334 09:58:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:24.334 09:58:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:24.334 09:58:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:24.334 09:58:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:24.334 09:58:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:24.334 09:58:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:17:24.334 09:58:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:17:24.334 09:58:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:17:24.334 09:58:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:17:24.334 09:58:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:17:24.334 09:58:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@432 -- # nvmf_veth_init 00:17:24.334 09:58:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:24.334 09:58:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:24.334 09:58:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:17:24.334 09:58:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:17:24.334 09:58:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:24.334 09:58:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:24.334 09:58:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:24.334 09:58:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:24.334 09:58:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:24.334 09:58:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:24.334 09:58:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:24.334 09:58:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:24.334 09:58:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:17:24.334 09:58:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:17:24.334 Cannot find device "nvmf_tgt_br" 00:17:24.334 09:58:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@155 -- # true 00:17:24.334 09:58:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:17:24.334 Cannot find device "nvmf_tgt_br2" 00:17:24.334 09:58:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@156 -- # true 00:17:24.334 09:58:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:17:24.334 09:58:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:17:24.334 Cannot find device "nvmf_tgt_br" 00:17:24.334 09:58:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@158 -- # true 00:17:24.334 09:58:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:17:24.334 Cannot find device "nvmf_tgt_br2" 
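(Editorial aside: the "Cannot find device" and "Cannot open network namespace" messages in this stretch are expected noise. nvmf_veth_init first tears down whatever interfaces and namespace a previous run may have left behind, and those deletes fail harmlessly when nothing is there; the topology is then rebuilt in the lines that follow. A condensed sketch of what the helper sets up, using the names and addresses that appear in the trace, is below; the second target pair (nvmf_tgt_if2 / 10.0.0.3) and the link-up steps are elided for brevity, and everything here needs root.)
# veth/netns topology rebuilt by nvmf_veth_init (condensed; second target pair omitted)
ip netns add nvmf_tgt_ns_spdk                                   # target side runs in its own namespace
ip link add nvmf_init_if type veth peer name nvmf_init_br       # initiator veth pair
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br        # target veth pair
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                  # move the target end into the namespace
ip addr add 10.0.0.1/24 dev nvmf_init_if                        # initiator address
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip link add nvmf_br type bridge                                 # bridge joins the host-side peer ends
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP traffic in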
00:17:24.334 09:58:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@159 -- # true 00:17:24.334 09:58:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:17:24.334 09:58:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:17:24.334 09:58:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:24.334 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:24.334 09:58:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@162 -- # true 00:17:24.334 09:58:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:24.334 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:24.334 09:58:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@163 -- # true 00:17:24.334 09:58:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:17:24.334 09:58:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:24.334 09:58:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:24.334 09:58:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:24.594 09:58:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:24.594 09:58:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:24.594 09:58:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:24.594 09:58:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:24.594 09:58:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:17:24.594 09:58:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:17:24.594 09:58:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:17:24.594 09:58:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:17:24.594 09:58:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:17:24.594 09:58:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:24.594 09:58:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:24.594 09:58:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:24.594 09:58:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:17:24.594 09:58:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:17:24.594 09:58:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:17:24.594 09:58:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:24.594 09:58:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:24.594 09:58:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:24.594 09:58:38 nvmf_tcp.nvmf_referrals -- 
nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:24.594 09:58:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:17:24.594 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:24.594 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.092 ms 00:17:24.594 00:17:24.594 --- 10.0.0.2 ping statistics --- 00:17:24.594 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:24.594 rtt min/avg/max/mdev = 0.092/0.092/0.092/0.000 ms 00:17:24.594 09:58:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:17:24.594 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:24.594 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.123 ms 00:17:24.594 00:17:24.594 --- 10.0.0.3 ping statistics --- 00:17:24.594 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:24.595 rtt min/avg/max/mdev = 0.123/0.123/0.123/0.000 ms 00:17:24.595 09:58:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:24.595 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:24.595 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.070 ms 00:17:24.595 00:17:24.595 --- 10.0.0.1 ping statistics --- 00:17:24.595 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:24.595 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:17:24.595 09:58:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:24.595 09:58:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@433 -- # return 0 00:17:24.595 09:58:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:24.595 09:58:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:24.595 09:58:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:24.595 09:58:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:24.595 09:58:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:24.595 09:58:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:24.595 09:58:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:24.595 09:58:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:17:24.595 09:58:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:24.595 09:58:38 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:24.595 09:58:38 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:17:24.595 09:58:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@481 -- # nvmfpid=66449 00:17:24.595 09:58:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@482 -- # waitforlisten 66449 00:17:24.595 09:58:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:24.595 09:58:38 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@829 -- # '[' -z 66449 ']' 00:17:24.595 09:58:38 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:24.595 09:58:38 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:24.595 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
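(Editorial aside: at this point the three pings have verified the veth topology and nvme-tcp is loaded, so nvmfappstart launches the target inside the namespace and waitforlisten blocks until its RPC socket answers. The sketch below mirrors the launch command shown in the trace; the rpc.py polling loop is an illustrative assumption, since the log only shows the waitforlisten wrapper, not its internals, and the /var/tmp/spdk.sock path is the SPDK default rather than something printed here.)
# launch the target in the namespace and wait for its RPC socket (poll loop is illustrative)
ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &   # same flags as the trace
nvmfpid=$!
until rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do        # wait for the app to answer RPC
    sleep 0.5
done
echo "nvmf_tgt ($nvmfpid) is up"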
00:17:24.595 09:58:38 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:24.595 09:58:38 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:24.595 09:58:38 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:17:24.595 [2024-07-15 09:58:38.166072] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:17:24.595 [2024-07-15 09:58:38.166165] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:24.861 [2024-07-15 09:58:38.305092] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:24.861 [2024-07-15 09:58:38.413894] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:24.861 [2024-07-15 09:58:38.413952] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:24.861 [2024-07-15 09:58:38.413963] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:24.861 [2024-07-15 09:58:38.413970] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:24.861 [2024-07-15 09:58:38.413978] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:24.861 [2024-07-15 09:58:38.414155] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:24.861 [2024-07-15 09:58:38.414362] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:24.861 [2024-07-15 09:58:38.414449] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:24.861 [2024-07-15 09:58:38.414454] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:17:25.798 09:58:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:25.798 09:58:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@862 -- # return 0 00:17:25.798 09:58:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:25.798 09:58:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:25.798 09:58:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:17:25.798 09:58:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:25.798 09:58:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:25.798 09:58:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:25.798 09:58:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:17:25.798 [2024-07-15 09:58:39.179204] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:25.798 09:58:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:25.798 09:58:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:17:25.798 09:58:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:25.798 09:58:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:17:25.798 [2024-07-15 09:58:39.205408] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** 
NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:17:25.798 09:58:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:25.798 09:58:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:17:25.798 09:58:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:25.798 09:58:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:17:25.798 09:58:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:25.798 09:58:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:17:25.798 09:58:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:25.798 09:58:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:17:25.798 09:58:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:25.798 09:58:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:17:25.798 09:58:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:25.798 09:58:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:17:25.798 09:58:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:25.798 09:58:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:17:25.798 09:58:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:25.798 09:58:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:17:25.798 09:58:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:17:25.798 09:58:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:25.798 09:58:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:17:25.798 09:58:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:17:25.798 09:58:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:17:25.798 09:58:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:17:25.798 09:58:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:17:25.798 09:58:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:17:25.798 09:58:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:25.798 09:58:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:17:25.798 09:58:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:25.798 09:58:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:17:25.798 09:58:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:17:25.798 09:58:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:17:25.798 09:58:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:17:25.798 09:58:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:17:25.798 09:58:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:17:25.798 09:58:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec --hostid=a2b6b25a-cc90-4aea-9f09-c06f8a634aec -t tcp -a 10.0.0.2 -s 8009 -o json 00:17:25.798 09:58:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:17:26.055 09:58:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:17:26.055 09:58:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:17:26.055 09:58:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:17:26.055 09:58:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:26.055 09:58:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:17:26.055 09:58:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:26.055 09:58:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:17:26.055 09:58:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:26.055 09:58:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:17:26.055 09:58:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:26.055 09:58:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:17:26.055 09:58:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:26.055 09:58:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:17:26.055 09:58:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:26.055 09:58:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:17:26.055 09:58:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:17:26.055 09:58:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:26.055 09:58:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:17:26.055 09:58:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:26.055 09:58:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:17:26.055 09:58:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:17:26.055 09:58:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:17:26.055 09:58:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:17:26.055 09:58:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:17:26.055 09:58:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:17:26.055 09:58:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec --hostid=a2b6b25a-cc90-4aea-9f09-c06f8a634aec -t tcp -a 10.0.0.2 -s 8009 -o json 00:17:26.055 09:58:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:17:26.055 09:58:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:17:26.055 09:58:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 
127.0.0.2 -s 4430 -n discovery 00:17:26.055 09:58:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:26.055 09:58:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:17:26.312 09:58:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:26.312 09:58:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:17:26.312 09:58:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:26.312 09:58:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:17:26.312 09:58:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:26.312 09:58:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:17:26.312 09:58:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:17:26.312 09:58:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:17:26.312 09:58:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:26.312 09:58:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:17:26.312 09:58:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:17:26.312 09:58:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:17:26.312 09:58:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:26.312 09:58:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:17:26.312 09:58:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:17:26.312 09:58:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:17:26.312 09:58:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:17:26.312 09:58:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:17:26.312 09:58:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:17:26.312 09:58:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec --hostid=a2b6b25a-cc90-4aea-9f09-c06f8a634aec -t tcp -a 10.0.0.2 -s 8009 -o json 00:17:26.312 09:58:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:17:26.312 09:58:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:17:26.312 09:58:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:17:26.312 09:58:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:17:26.312 09:58:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:17:26.312 09:58:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:17:26.312 09:58:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec --hostid=a2b6b25a-cc90-4aea-9f09-c06f8a634aec -t tcp -a 10.0.0.2 -s 8009 -o json 00:17:26.312 09:58:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:17:26.312 09:58:39 
nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:17:26.312 09:58:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:17:26.312 09:58:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:17:26.312 09:58:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:17:26.312 09:58:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:17:26.312 09:58:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec --hostid=a2b6b25a-cc90-4aea-9f09-c06f8a634aec -t tcp -a 10.0.0.2 -s 8009 -o json 00:17:26.571 09:58:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:17:26.571 09:58:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:17:26.571 09:58:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:26.571 09:58:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:17:26.571 09:58:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:26.571 09:58:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:17:26.571 09:58:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:17:26.571 09:58:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:17:26.571 09:58:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:26.571 09:58:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:17:26.571 09:58:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:17:26.571 09:58:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:17:26.571 09:58:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:26.571 09:58:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:17:26.571 09:58:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:17:26.571 09:58:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:17:26.571 09:58:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:17:26.571 09:58:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:17:26.571 09:58:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec --hostid=a2b6b25a-cc90-4aea-9f09-c06f8a634aec -t tcp -a 10.0.0.2 -s 8009 -o json 00:17:26.571 09:58:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:17:26.571 09:58:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:17:26.571 09:58:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:17:26.571 09:58:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:17:26.571 09:58:40 
nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:17:26.571 09:58:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:17:26.571 09:58:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:17:26.571 09:58:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec --hostid=a2b6b25a-cc90-4aea-9f09-c06f8a634aec -t tcp -a 10.0.0.2 -s 8009 -o json 00:17:26.571 09:58:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:17:26.830 09:58:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:17:26.830 09:58:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:17:26.830 09:58:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:17:26.830 09:58:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:17:26.830 09:58:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec --hostid=a2b6b25a-cc90-4aea-9f09-c06f8a634aec -t tcp -a 10.0.0.2 -s 8009 -o json 00:17:26.830 09:58:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:17:26.830 09:58:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:17:26.830 09:58:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:17:26.830 09:58:40 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:26.830 09:58:40 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:17:26.830 09:58:40 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:26.830 09:58:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:17:26.830 09:58:40 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:26.830 09:58:40 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:17:26.830 09:58:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:17:26.830 09:58:40 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:26.830 09:58:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:17:26.830 09:58:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:17:26.830 09:58:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:17:26.830 09:58:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:17:26.830 09:58:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:17:26.830 09:58:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec --hostid=a2b6b25a-cc90-4aea-9f09-c06f8a634aec -t tcp -a 10.0.0.2 -s 8009 -o json 00:17:26.830 09:58:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:17:26.830 
09:58:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:17:26.830 09:58:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:17:26.830 09:58:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:17:26.830 09:58:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:17:26.830 09:58:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:26.830 09:58:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@117 -- # sync 00:17:27.089 09:58:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:27.089 09:58:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@120 -- # set +e 00:17:27.089 09:58:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:27.089 09:58:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:27.089 rmmod nvme_tcp 00:17:27.089 rmmod nvme_fabrics 00:17:27.089 rmmod nvme_keyring 00:17:27.089 09:58:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:27.089 09:58:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@124 -- # set -e 00:17:27.089 09:58:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@125 -- # return 0 00:17:27.089 09:58:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@489 -- # '[' -n 66449 ']' 00:17:27.089 09:58:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@490 -- # killprocess 66449 00:17:27.089 09:58:40 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@948 -- # '[' -z 66449 ']' 00:17:27.089 09:58:40 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@952 -- # kill -0 66449 00:17:27.089 09:58:40 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@953 -- # uname 00:17:27.089 09:58:40 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:27.089 09:58:40 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 66449 00:17:27.089 09:58:40 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:27.089 09:58:40 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:27.089 09:58:40 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@966 -- # echo 'killing process with pid 66449' 00:17:27.089 killing process with pid 66449 00:17:27.089 09:58:40 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@967 -- # kill 66449 00:17:27.089 09:58:40 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@972 -- # wait 66449 00:17:27.348 09:58:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:27.348 09:58:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:27.348 09:58:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:27.348 09:58:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:27.348 09:58:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:27.348 09:58:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:27.348 09:58:40 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:27.348 09:58:40 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:27.348 09:58:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:17:27.348 00:17:27.348 real 0m3.245s 00:17:27.348 user 0m10.383s 00:17:27.348 sys 0m0.939s 00:17:27.348 09:58:40 nvmf_tcp.nvmf_referrals -- 
common/autotest_common.sh@1124 -- # xtrace_disable 00:17:27.348 09:58:40 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:17:27.348 ************************************ 00:17:27.348 END TEST nvmf_referrals 00:17:27.348 ************************************ 00:17:27.348 09:58:40 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:17:27.348 09:58:40 nvmf_tcp -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:17:27.348 09:58:40 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:27.348 09:58:40 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:27.348 09:58:40 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:27.348 ************************************ 00:17:27.348 START TEST nvmf_connect_disconnect 00:17:27.348 ************************************ 00:17:27.348 09:58:40 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:17:27.608 * Looking for test storage... 00:17:27.608 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:17:27.608 09:58:40 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:27.608 09:58:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:17:27.608 09:58:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:27.608 09:58:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:27.608 09:58:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:27.608 09:58:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:27.608 09:58:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:27.608 09:58:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:27.608 09:58:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:27.608 09:58:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:27.608 09:58:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:27.608 09:58:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:27.608 09:58:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec 00:17:27.608 09:58:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=a2b6b25a-cc90-4aea-9f09-c06f8a634aec 00:17:27.608 09:58:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:27.608 09:58:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:27.608 09:58:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:27.608 09:58:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:27.608 09:58:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:27.608 09:58:41 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:27.608 09:58:41 
nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:27.608 09:58:41 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:27.608 09:58:41 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:27.608 09:58:41 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:27.608 09:58:41 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:27.608 09:58:41 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:17:27.608 09:58:41 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:27.608 09:58:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@47 -- # : 0 00:17:27.608 09:58:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:27.608 09:58:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:27.608 09:58:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:27.608 09:58:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:27.608 09:58:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:27.608 09:58:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:27.608 09:58:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@35 
-- # '[' 0 -eq 1 ']' 00:17:27.608 09:58:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:27.608 09:58:41 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:27.608 09:58:41 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:27.608 09:58:41 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:17:27.608 09:58:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:27.608 09:58:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:27.608 09:58:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:27.608 09:58:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:27.608 09:58:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:27.608 09:58:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:27.608 09:58:41 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:27.608 09:58:41 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:27.608 09:58:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:17:27.608 09:58:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:17:27.608 09:58:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:17:27.608 09:58:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:17:27.608 09:58:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:17:27.608 09:58:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@432 -- # nvmf_veth_init 00:17:27.608 09:58:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:27.608 09:58:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:27.608 09:58:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:17:27.608 09:58:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:17:27.608 09:58:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:27.608 09:58:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:27.608 09:58:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:27.608 09:58:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:27.608 09:58:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:27.608 09:58:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:27.608 09:58:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:27.608 09:58:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:27.608 09:58:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:17:27.608 09:58:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br 
nomaster 00:17:27.608 Cannot find device "nvmf_tgt_br" 00:17:27.608 09:58:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@155 -- # true 00:17:27.608 09:58:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:17:27.608 Cannot find device "nvmf_tgt_br2" 00:17:27.608 09:58:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@156 -- # true 00:17:27.608 09:58:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:17:27.608 09:58:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:17:27.608 Cannot find device "nvmf_tgt_br" 00:17:27.608 09:58:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@158 -- # true 00:17:27.608 09:58:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:17:27.608 Cannot find device "nvmf_tgt_br2" 00:17:27.608 09:58:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@159 -- # true 00:17:27.608 09:58:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:17:27.608 09:58:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:17:27.608 09:58:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:27.608 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:27.608 09:58:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@162 -- # true 00:17:27.608 09:58:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:27.608 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:27.608 09:58:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@163 -- # true 00:17:27.608 09:58:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:17:27.608 09:58:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:27.867 09:58:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:27.867 09:58:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:27.867 09:58:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:27.867 09:58:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:27.867 09:58:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:27.867 09:58:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:27.867 09:58:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:17:27.867 09:58:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:17:27.867 09:58:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:17:27.867 09:58:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:17:27.867 09:58:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:17:27.867 09:58:41 
nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:27.867 09:58:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:27.867 09:58:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:27.867 09:58:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:17:27.867 09:58:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:17:27.867 09:58:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:17:27.867 09:58:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:27.867 09:58:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:27.867 09:58:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:27.867 09:58:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:27.867 09:58:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:17:27.867 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:27.867 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.097 ms 00:17:27.867 00:17:27.867 --- 10.0.0.2 ping statistics --- 00:17:27.867 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:27.867 rtt min/avg/max/mdev = 0.097/0.097/0.097/0.000 ms 00:17:27.867 09:58:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:17:27.867 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:27.867 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.051 ms 00:17:27.867 00:17:27.867 --- 10.0.0.3 ping statistics --- 00:17:27.867 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:27.867 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:17:27.867 09:58:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:27.867 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:27.867 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.053 ms 00:17:27.867 00:17:27.867 --- 10.0.0.1 ping statistics --- 00:17:27.867 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:27.867 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:17:27.867 09:58:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:27.867 09:58:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@433 -- # return 0 00:17:27.867 09:58:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:27.867 09:58:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:27.867 09:58:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:27.868 09:58:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:27.868 09:58:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:27.868 09:58:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:27.868 09:58:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:27.868 09:58:41 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:17:27.868 09:58:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:27.868 09:58:41 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:27.868 09:58:41 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:17:27.868 09:58:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:27.868 09:58:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # nvmfpid=66757 00:17:27.868 09:58:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # waitforlisten 66757 00:17:27.868 09:58:41 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@829 -- # '[' -z 66757 ']' 00:17:27.868 09:58:41 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:27.868 09:58:41 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:27.868 09:58:41 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:27.868 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:27.868 09:58:41 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:27.868 09:58:41 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:17:27.868 [2024-07-15 09:58:41.431649] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:17:27.868 [2024-07-15 09:58:41.431726] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:28.126 [2024-07-15 09:58:41.571462] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:28.126 [2024-07-15 09:58:41.680670] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
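The nvmf_veth_init sequence traced above builds the virtual test network for the TCP transport: the initiator keeps 10.0.0.1 on nvmf_init_if in the default namespace, the target ends nvmf_tgt_if (10.0.0.2) and nvmf_tgt_if2 (10.0.0.3) are moved into the nvmf_tgt_ns_spdk namespace, the host-side veth peers are enslaved to the nvmf_br bridge, and iptables opens TCP port 4420 before the three ping checks. Condensed into a standalone shell sketch (interface names, addresses and rules exactly as in the trace; a summary of the commands above, not the test script itself):

  # target-side namespace plus three veth pairs
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  # move the target ends into the namespace and address everything
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  # bring the links up on both sides of the namespace boundary
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip link set nvmf_tgt_br2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  # bridge the host-side peers together
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  # allow NVMe/TCP traffic and confirm reachability in both directions
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1

Because the target interfaces live in their own namespace, the nvmf_tgt application is launched below with ip netns exec nvmf_tgt_ns_spdk, so it listens on 10.0.0.2 while the initiator connects from the default namespace.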
00:17:28.126 [2024-07-15 09:58:41.680726] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:28.126 [2024-07-15 09:58:41.680733] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:28.126 [2024-07-15 09:58:41.680738] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:28.126 [2024-07-15 09:58:41.680743] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:28.126 [2024-07-15 09:58:41.680946] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:28.126 [2024-07-15 09:58:41.681029] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:28.126 [2024-07-15 09:58:41.681320] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:28.126 [2024-07-15 09:58:41.681322] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:17:29.063 09:58:42 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:29.063 09:58:42 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@862 -- # return 0 00:17:29.063 09:58:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:29.063 09:58:42 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:29.063 09:58:42 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:17:29.063 09:58:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:29.063 09:58:42 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:17:29.063 09:58:42 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:29.063 09:58:42 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:17:29.063 [2024-07-15 09:58:42.400948] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:29.063 09:58:42 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:29.063 09:58:42 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:17:29.063 09:58:42 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:29.063 09:58:42 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:17:29.063 09:58:42 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:29.063 09:58:42 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:17:29.063 09:58:42 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:17:29.063 09:58:42 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:29.063 09:58:42 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:17:29.063 09:58:42 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:29.063 09:58:42 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:29.063 09:58:42 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 
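The rpc_cmd calls traced just above (nvmf_create_transport, bdev_malloc_create, nvmf_create_subsystem, nvmf_subsystem_add_ns), together with the nvmf_subsystem_add_listener call that follows, make up the whole target-side configuration for this test. Assuming rpc_cmd is a thin wrapper around SPDK's scripts/rpc.py talking to the default /var/tmp/spdk.sock (the wrapper body is not shown in this log), a standalone equivalent would be roughly:

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py    # assumed path of the JSON-RPC client
  # TCP transport, options exactly as recorded in the trace
  $RPC nvmf_create_transport -t tcp -o -u 8192 -c 0
  # malloc bdev built from MALLOC_BDEV_SIZE=64 and MALLOC_BLOCK_SIZE=512 set earlier
  $RPC bdev_malloc_create 64 512                     # prints the bdev name, Malloc0 here
  # one subsystem carrying that namespace, reachable on the in-namespace address
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The listener address 10.0.0.2 is the namespace side of the veth pair set up earlier, which is why the ping from the default namespace had to succeed before this point.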
00:17:29.063 09:58:42 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:17:29.063 09:58:42 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:29.063 09:58:42 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:29.063 09:58:42 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:29.063 09:58:42 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:17:29.063 [2024-07-15 09:58:42.479866] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:29.063 09:58:42 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:29.063 09:58:42 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:17:29.063 09:58:42 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:17:29.063 09:58:42 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:17:31.596 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:33.521 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:36.055 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:37.963 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:40.496 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:40.496 09:58:53 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:17:40.496 09:58:53 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:17:40.496 09:58:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:40.496 09:58:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # sync 00:17:40.496 09:58:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:40.496 09:58:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@120 -- # set +e 00:17:40.496 09:58:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:40.496 09:58:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:40.496 rmmod nvme_tcp 00:17:40.496 rmmod nvme_fabrics 00:17:40.496 rmmod nvme_keyring 00:17:40.496 09:58:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:40.496 09:58:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set -e 00:17:40.496 09:58:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # return 0 00:17:40.496 09:58:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@489 -- # '[' -n 66757 ']' 00:17:40.496 09:58:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # killprocess 66757 00:17:40.496 09:58:53 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@948 -- # '[' -z 66757 ']' 00:17:40.496 09:58:53 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # kill -0 66757 00:17:40.496 09:58:53 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # uname 00:17:40.496 09:58:53 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:40.496 09:58:53 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 66757 00:17:40.496 09:58:53 nvmf_tcp.nvmf_connect_disconnect -- 
common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:40.496 09:58:53 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:40.496 killing process with pid 66757 00:17:40.496 09:58:53 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@966 -- # echo 'killing process with pid 66757' 00:17:40.496 09:58:53 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@967 -- # kill 66757 00:17:40.496 09:58:53 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # wait 66757 00:17:40.755 09:58:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:40.755 09:58:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:40.755 09:58:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:40.755 09:58:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:40.755 09:58:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:40.755 09:58:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:40.755 09:58:54 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:40.755 09:58:54 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:40.755 09:58:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:17:40.755 00:17:40.755 real 0m13.303s 00:17:40.755 user 0m49.357s 00:17:40.755 sys 0m1.464s 00:17:40.755 09:58:54 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:40.755 09:58:54 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:17:40.755 ************************************ 00:17:40.755 END TEST nvmf_connect_disconnect 00:17:40.755 ************************************ 00:17:40.755 09:58:54 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:17:40.755 09:58:54 nvmf_tcp -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:17:40.755 09:58:54 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:40.755 09:58:54 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:40.755 09:58:54 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:40.755 ************************************ 00:17:40.755 START TEST nvmf_multitarget 00:17:40.755 ************************************ 00:17:40.755 09:58:54 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:17:41.015 * Looking for test storage... 
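Before the multitarget trace continues, a note on the test that just finished: the five "NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)" lines above come from a loop that attaches and detaches the initiator num_iterations=5 times. The connect command itself is not echoed in this log, so the flags in the sketch below are an assumption based on standard nvme-cli usage; only the disconnect output is confirmed by the trace:

  for i in $(seq 1 5); do                                  # num_iterations=5 in the trace
      # assumed nvme-cli invocation; the actual test helper is not echoed here
      nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
      # the test would wait for the controller before tearing it down again
      nvme disconnect -n nqn.2016-06.io.spdk:cnode1        # prints "... disconnected 1 controller(s)"
  done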
00:17:41.015 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:17:41.015 09:58:54 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:41.015 09:58:54 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:17:41.015 09:58:54 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:41.015 09:58:54 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:41.015 09:58:54 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:41.015 09:58:54 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:41.015 09:58:54 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:41.015 09:58:54 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:41.015 09:58:54 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:41.015 09:58:54 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:41.015 09:58:54 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:41.015 09:58:54 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:41.015 09:58:54 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec 00:17:41.015 09:58:54 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=a2b6b25a-cc90-4aea-9f09-c06f8a634aec 00:17:41.015 09:58:54 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:41.015 09:58:54 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:41.015 09:58:54 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:41.015 09:58:54 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:41.015 09:58:54 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:41.015 09:58:54 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:41.015 09:58:54 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:41.015 09:58:54 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:41.015 09:58:54 nvmf_tcp.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:41.015 09:58:54 nvmf_tcp.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:41.015 09:58:54 nvmf_tcp.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:41.015 09:58:54 nvmf_tcp.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:17:41.015 09:58:54 nvmf_tcp.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:41.015 09:58:54 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@47 -- # : 0 00:17:41.015 09:58:54 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:41.015 09:58:54 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:41.015 09:58:54 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:41.015 09:58:54 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:41.015 09:58:54 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:41.015 09:58:54 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:41.015 09:58:54 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:41.015 09:58:54 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:41.015 09:58:54 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py 00:17:41.015 09:58:54 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:17:41.015 09:58:54 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:41.015 09:58:54 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:41.015 09:58:54 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:41.015 09:58:54 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:41.015 09:58:54 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:41.016 09:58:54 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:41.016 09:58:54 
nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:41.016 09:58:54 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:41.016 09:58:54 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:17:41.016 09:58:54 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:17:41.016 09:58:54 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:17:41.016 09:58:54 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:17:41.016 09:58:54 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:17:41.016 09:58:54 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@432 -- # nvmf_veth_init 00:17:41.016 09:58:54 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:41.016 09:58:54 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:41.016 09:58:54 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:17:41.016 09:58:54 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:17:41.016 09:58:54 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:41.016 09:58:54 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:41.016 09:58:54 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:41.016 09:58:54 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:41.016 09:58:54 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:41.016 09:58:54 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:41.016 09:58:54 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:41.016 09:58:54 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:41.016 09:58:54 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:17:41.016 09:58:54 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:17:41.016 Cannot find device "nvmf_tgt_br" 00:17:41.016 09:58:54 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@155 -- # true 00:17:41.016 09:58:54 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:17:41.016 Cannot find device "nvmf_tgt_br2" 00:17:41.016 09:58:54 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@156 -- # true 00:17:41.016 09:58:54 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:17:41.016 09:58:54 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:17:41.016 Cannot find device "nvmf_tgt_br" 00:17:41.016 09:58:54 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@158 -- # true 00:17:41.016 09:58:54 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:17:41.016 Cannot find device "nvmf_tgt_br2" 00:17:41.016 09:58:54 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@159 -- # true 00:17:41.016 09:58:54 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:17:41.016 09:58:54 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:17:41.016 09:58:54 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link 
delete nvmf_tgt_if 00:17:41.016 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:41.016 09:58:54 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@162 -- # true 00:17:41.016 09:58:54 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:41.016 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:41.016 09:58:54 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@163 -- # true 00:17:41.016 09:58:54 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:17:41.016 09:58:54 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:41.016 09:58:54 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:41.016 09:58:54 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:41.016 09:58:54 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:41.016 09:58:54 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:41.016 09:58:54 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:41.016 09:58:54 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:41.273 09:58:54 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:17:41.273 09:58:54 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:17:41.273 09:58:54 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:17:41.273 09:58:54 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:17:41.274 09:58:54 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:17:41.274 09:58:54 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:41.274 09:58:54 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:41.274 09:58:54 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:41.274 09:58:54 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:17:41.274 09:58:54 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:17:41.274 09:58:54 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:17:41.274 09:58:54 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:41.274 09:58:54 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:41.274 09:58:54 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:41.274 09:58:54 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:41.274 09:58:54 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:17:41.274 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:17:41.274 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.137 ms 00:17:41.274 00:17:41.274 --- 10.0.0.2 ping statistics --- 00:17:41.274 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:41.274 rtt min/avg/max/mdev = 0.137/0.137/0.137/0.000 ms 00:17:41.274 09:58:54 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:17:41.274 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:41.274 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.093 ms 00:17:41.274 00:17:41.274 --- 10.0.0.3 ping statistics --- 00:17:41.274 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:41.274 rtt min/avg/max/mdev = 0.093/0.093/0.093/0.000 ms 00:17:41.274 09:58:54 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:41.274 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:41.274 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.046 ms 00:17:41.274 00:17:41.274 --- 10.0.0.1 ping statistics --- 00:17:41.274 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:41.274 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:17:41.274 09:58:54 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:41.274 09:58:54 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@433 -- # return 0 00:17:41.274 09:58:54 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:41.274 09:58:54 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:41.274 09:58:54 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:41.274 09:58:54 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:41.274 09:58:54 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:41.274 09:58:54 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:41.274 09:58:54 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:41.274 09:58:54 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:17:41.274 09:58:54 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:41.274 09:58:54 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:41.274 09:58:54 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:17:41.274 09:58:54 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@481 -- # nvmfpid=67160 00:17:41.274 09:58:54 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:41.274 09:58:54 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@482 -- # waitforlisten 67160 00:17:41.274 09:58:54 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@829 -- # '[' -z 67160 ']' 00:17:41.274 09:58:54 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:41.274 09:58:54 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:41.274 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:41.274 09:58:54 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:17:41.274 09:58:54 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:41.274 09:58:54 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:17:41.274 [2024-07-15 09:58:54.819897] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:17:41.274 [2024-07-15 09:58:54.819963] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:41.531 [2024-07-15 09:58:54.958218] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:41.531 [2024-07-15 09:58:55.062498] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:41.531 [2024-07-15 09:58:55.062547] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:41.531 [2024-07-15 09:58:55.062553] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:41.531 [2024-07-15 09:58:55.062557] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:41.531 [2024-07-15 09:58:55.062561] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:41.531 [2024-07-15 09:58:55.062749] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:41.531 [2024-07-15 09:58:55.062962] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:41.531 [2024-07-15 09:58:55.063280] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:41.531 [2024-07-15 09:58:55.063284] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:17:42.468 09:58:55 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:42.468 09:58:55 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@862 -- # return 0 00:17:42.468 09:58:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:42.468 09:58:55 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:42.468 09:58:55 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:17:42.468 09:58:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:42.468 09:58:55 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:17:42.468 09:58:55 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:17:42.468 09:58:55 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:17:42.468 09:58:55 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:17:42.468 09:58:55 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@25 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:17:42.468 "nvmf_tgt_1" 00:17:42.468 09:58:55 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:17:42.726 "nvmf_tgt_2" 00:17:42.726 09:58:56 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 
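The checks that follow exercise the secondary-target RPCs through the helper script named in the trace. Condensed into plain shell (target names and the -s 32 size are the ones passed in the trace, and the count checks mirror the test's jq length comparisons; a sketch of the flow, not the script verbatim):

  MT=/home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py
  test "$($MT nvmf_get_targets | jq length)" -eq 1    # only the default target at the start
  $MT nvmf_create_target -n nvmf_tgt_1 -s 32          # prints "nvmf_tgt_1"
  $MT nvmf_create_target -n nvmf_tgt_2 -s 32          # prints "nvmf_tgt_2"
  test "$($MT nvmf_get_targets | jq length)" -eq 3    # default target plus the two new ones
  $MT nvmf_delete_target -n nvmf_tgt_1                # each delete returns true
  $MT nvmf_delete_target -n nvmf_tgt_2
  test "$($MT nvmf_get_targets | jq length)" -eq 1    # back to just the default target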
00:17:42.726 09:58:56 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:17:42.726 09:58:56 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:17:42.726 09:58:56 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@32 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:17:42.726 true 00:17:42.984 09:58:56 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@33 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:17:42.984 true 00:17:42.984 09:58:56 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:17:42.984 09:58:56 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:17:42.984 09:58:56 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:17:42.984 09:58:56 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:17:42.984 09:58:56 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:17:42.984 09:58:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:42.984 09:58:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@117 -- # sync 00:17:43.242 09:58:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:43.242 09:58:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@120 -- # set +e 00:17:43.242 09:58:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:43.242 09:58:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:43.242 rmmod nvme_tcp 00:17:43.242 rmmod nvme_fabrics 00:17:43.242 rmmod nvme_keyring 00:17:43.242 09:58:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:43.242 09:58:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@124 -- # set -e 00:17:43.242 09:58:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@125 -- # return 0 00:17:43.242 09:58:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@489 -- # '[' -n 67160 ']' 00:17:43.242 09:58:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@490 -- # killprocess 67160 00:17:43.242 09:58:56 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@948 -- # '[' -z 67160 ']' 00:17:43.242 09:58:56 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@952 -- # kill -0 67160 00:17:43.242 09:58:56 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@953 -- # uname 00:17:43.242 09:58:56 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:43.242 09:58:56 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 67160 00:17:43.242 09:58:56 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:43.242 09:58:56 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:43.242 killing process with pid 67160 00:17:43.242 09:58:56 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@966 -- # echo 'killing process with pid 67160' 00:17:43.242 09:58:56 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@967 -- # kill 67160 00:17:43.242 09:58:56 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@972 -- # wait 67160 00:17:43.500 09:58:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:43.500 09:58:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:43.500 09:58:56 nvmf_tcp.nvmf_multitarget 
-- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:43.500 09:58:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:43.500 09:58:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:43.500 09:58:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:43.500 09:58:56 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:43.500 09:58:56 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:43.500 09:58:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:17:43.500 00:17:43.500 real 0m2.755s 00:17:43.500 user 0m8.310s 00:17:43.500 sys 0m0.738s 00:17:43.500 09:58:56 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:43.500 09:58:56 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:17:43.500 ************************************ 00:17:43.500 END TEST nvmf_multitarget 00:17:43.500 ************************************ 00:17:43.500 09:58:57 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:17:43.500 09:58:57 nvmf_tcp -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:17:43.500 09:58:57 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:43.500 09:58:57 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:43.500 09:58:57 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:43.500 ************************************ 00:17:43.500 START TEST nvmf_rpc 00:17:43.500 ************************************ 00:17:43.500 09:58:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:17:43.773 * Looking for test storage... 
00:17:43.773 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:17:43.773 09:58:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:43.773 09:58:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:17:43.773 09:58:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:43.773 09:58:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:43.773 09:58:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:43.773 09:58:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:43.773 09:58:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:43.773 09:58:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:43.773 09:58:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:43.773 09:58:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:43.773 09:58:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:43.773 09:58:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:43.773 09:58:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec 00:17:43.773 09:58:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=a2b6b25a-cc90-4aea-9f09-c06f8a634aec 00:17:43.773 09:58:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:43.773 09:58:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:43.773 09:58:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:43.773 09:58:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:43.773 09:58:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:43.773 09:58:57 nvmf_tcp.nvmf_rpc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:43.773 09:58:57 nvmf_tcp.nvmf_rpc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:43.773 09:58:57 nvmf_tcp.nvmf_rpc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:43.773 09:58:57 nvmf_tcp.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:43.774 09:58:57 nvmf_tcp.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:43.774 09:58:57 nvmf_tcp.nvmf_rpc -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:43.774 09:58:57 nvmf_tcp.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:17:43.774 09:58:57 nvmf_tcp.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:43.774 09:58:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@47 -- # : 0 00:17:43.774 09:58:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:43.774 09:58:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:43.774 09:58:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:43.774 09:58:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:43.774 09:58:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:43.774 09:58:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:43.774 09:58:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:43.774 09:58:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:43.774 09:58:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:17:43.774 09:58:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:17:43.774 09:58:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:43.774 09:58:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:43.774 09:58:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:43.774 09:58:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:43.774 09:58:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:43.774 09:58:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:43.774 09:58:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:43.774 09:58:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:43.774 09:58:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:17:43.774 09:58:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:17:43.774 09:58:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:17:43.774 09:58:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:17:43.774 09:58:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:17:43.774 09:58:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@432 -- # nvmf_veth_init 00:17:43.774 09:58:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:43.774 09:58:57 nvmf_tcp.nvmf_rpc 
-- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:43.774 09:58:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:17:43.774 09:58:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:17:43.774 09:58:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:43.774 09:58:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:43.774 09:58:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:43.774 09:58:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:43.774 09:58:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:43.774 09:58:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:43.774 09:58:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:43.774 09:58:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:43.774 09:58:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:17:43.774 09:58:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:17:43.774 Cannot find device "nvmf_tgt_br" 00:17:43.774 09:58:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@155 -- # true 00:17:43.774 09:58:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:17:43.774 Cannot find device "nvmf_tgt_br2" 00:17:43.774 09:58:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@156 -- # true 00:17:43.774 09:58:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:17:43.774 09:58:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:17:43.774 Cannot find device "nvmf_tgt_br" 00:17:43.774 09:58:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@158 -- # true 00:17:43.774 09:58:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:17:43.774 Cannot find device "nvmf_tgt_br2" 00:17:43.774 09:58:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@159 -- # true 00:17:43.774 09:58:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:17:43.774 09:58:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:17:43.774 09:58:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:43.774 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:43.774 09:58:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@162 -- # true 00:17:43.774 09:58:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:43.774 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:43.774 09:58:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@163 -- # true 00:17:43.774 09:58:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:17:43.774 09:58:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:43.774 09:58:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:43.774 09:58:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:44.032 09:58:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:44.032 09:58:57 nvmf_tcp.nvmf_rpc -- 
nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:44.032 09:58:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:44.032 09:58:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:44.032 09:58:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:17:44.032 09:58:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:17:44.032 09:58:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:17:44.032 09:58:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:17:44.032 09:58:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:17:44.032 09:58:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:44.032 09:58:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:44.032 09:58:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:44.032 09:58:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:17:44.032 09:58:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:17:44.032 09:58:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:17:44.032 09:58:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:44.032 09:58:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:44.032 09:58:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:44.032 09:58:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:44.033 09:58:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:17:44.033 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:44.033 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.133 ms 00:17:44.033 00:17:44.033 --- 10.0.0.2 ping statistics --- 00:17:44.033 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:44.033 rtt min/avg/max/mdev = 0.133/0.133/0.133/0.000 ms 00:17:44.033 09:58:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:17:44.033 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:44.033 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.087 ms 00:17:44.033 00:17:44.033 --- 10.0.0.3 ping statistics --- 00:17:44.033 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:44.033 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:17:44.033 09:58:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:44.033 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:44.033 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.084 ms 00:17:44.033 00:17:44.033 --- 10.0.0.1 ping statistics --- 00:17:44.033 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:44.033 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 00:17:44.033 09:58:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:44.033 09:58:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@433 -- # return 0 00:17:44.033 09:58:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:44.033 09:58:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:44.033 09:58:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:44.033 09:58:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:44.033 09:58:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:44.033 09:58:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:44.033 09:58:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:44.033 09:58:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:17:44.033 09:58:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:44.033 09:58:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:44.033 09:58:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:44.033 09:58:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:44.033 09:58:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@481 -- # nvmfpid=67385 00:17:44.033 09:58:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@482 -- # waitforlisten 67385 00:17:44.033 09:58:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@829 -- # '[' -z 67385 ']' 00:17:44.033 09:58:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:44.033 09:58:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:44.033 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:44.033 09:58:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:44.033 09:58:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:44.033 09:58:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:44.033 [2024-07-15 09:58:57.589965] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:17:44.033 [2024-07-15 09:58:57.590029] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:44.292 [2024-07-15 09:58:57.733328] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:44.292 [2024-07-15 09:58:57.841897] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:44.292 [2024-07-15 09:58:57.842088] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
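The nvmf/common.sh@142-207 trace above is the network fixture for the whole run: one veth pair per leg, the target ends moved into the nvmf_tgt_ns_spdk namespace, the host-side peers enslaved to the nvmf_br bridge, an iptables accept rule for TCP/4420, and a ping check of each address before nvmf_tgt is launched inside the namespace. A condensed host-side sketch using the interface names and 10.0.0.0/24 addresses exactly as they appear in the trace (the second target leg, nvmf_tgt_if2 / 10.0.0.3, repeats the same pattern; this is an illustration, not the full nvmf/common.sh logic):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br        # initiator leg
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br         # target leg
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2                                               # host -> target namespace
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1                # target namespace -> host
  ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &

The last command is backgrounded by the harness (nvmfappstart), which then waits for the target to come up and listen on /var/tmp/spdk.sock, as the "Waiting for process to start up..." line below records.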
00:17:44.292 [2024-07-15 09:58:57.842154] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:44.292 [2024-07-15 09:58:57.842196] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:44.292 [2024-07-15 09:58:57.842236] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:44.292 [2024-07-15 09:58:57.842497] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:44.292 [2024-07-15 09:58:57.842655] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:44.292 [2024-07-15 09:58:57.842603] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:44.292 [2024-07-15 09:58:57.842692] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:17:45.225 09:58:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:45.225 09:58:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@862 -- # return 0 00:17:45.225 09:58:58 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:45.225 09:58:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:45.225 09:58:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:45.225 09:58:58 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:45.225 09:58:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:17:45.225 09:58:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:45.225 09:58:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:45.225 09:58:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:45.225 09:58:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:17:45.225 "poll_groups": [ 00:17:45.225 { 00:17:45.225 "admin_qpairs": 0, 00:17:45.225 "completed_nvme_io": 0, 00:17:45.225 "current_admin_qpairs": 0, 00:17:45.225 "current_io_qpairs": 0, 00:17:45.225 "io_qpairs": 0, 00:17:45.225 "name": "nvmf_tgt_poll_group_000", 00:17:45.225 "pending_bdev_io": 0, 00:17:45.225 "transports": [] 00:17:45.226 }, 00:17:45.226 { 00:17:45.226 "admin_qpairs": 0, 00:17:45.226 "completed_nvme_io": 0, 00:17:45.226 "current_admin_qpairs": 0, 00:17:45.226 "current_io_qpairs": 0, 00:17:45.226 "io_qpairs": 0, 00:17:45.226 "name": "nvmf_tgt_poll_group_001", 00:17:45.226 "pending_bdev_io": 0, 00:17:45.226 "transports": [] 00:17:45.226 }, 00:17:45.226 { 00:17:45.226 "admin_qpairs": 0, 00:17:45.226 "completed_nvme_io": 0, 00:17:45.226 "current_admin_qpairs": 0, 00:17:45.226 "current_io_qpairs": 0, 00:17:45.226 "io_qpairs": 0, 00:17:45.226 "name": "nvmf_tgt_poll_group_002", 00:17:45.226 "pending_bdev_io": 0, 00:17:45.226 "transports": [] 00:17:45.226 }, 00:17:45.226 { 00:17:45.226 "admin_qpairs": 0, 00:17:45.226 "completed_nvme_io": 0, 00:17:45.226 "current_admin_qpairs": 0, 00:17:45.226 "current_io_qpairs": 0, 00:17:45.226 "io_qpairs": 0, 00:17:45.226 "name": "nvmf_tgt_poll_group_003", 00:17:45.226 "pending_bdev_io": 0, 00:17:45.226 "transports": [] 00:17:45.226 } 00:17:45.226 ], 00:17:45.226 "tick_rate": 2290000000 00:17:45.226 }' 00:17:45.226 09:58:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:17:45.226 09:58:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:17:45.226 09:58:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:17:45.226 09:58:58 nvmf_tcp.nvmf_rpc -- 
target/rpc.sh@15 -- # wc -l 00:17:45.226 09:58:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:17:45.226 09:58:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:17:45.226 09:58:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:17:45.226 09:58:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:45.226 09:58:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:45.226 09:58:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:45.226 [2024-07-15 09:58:58.621771] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:45.226 09:58:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:45.226 09:58:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:17:45.226 09:58:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:45.226 09:58:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:45.226 09:58:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:45.226 09:58:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:17:45.226 "poll_groups": [ 00:17:45.226 { 00:17:45.226 "admin_qpairs": 0, 00:17:45.226 "completed_nvme_io": 0, 00:17:45.226 "current_admin_qpairs": 0, 00:17:45.226 "current_io_qpairs": 0, 00:17:45.226 "io_qpairs": 0, 00:17:45.226 "name": "nvmf_tgt_poll_group_000", 00:17:45.226 "pending_bdev_io": 0, 00:17:45.226 "transports": [ 00:17:45.226 { 00:17:45.226 "trtype": "TCP" 00:17:45.226 } 00:17:45.226 ] 00:17:45.226 }, 00:17:45.226 { 00:17:45.226 "admin_qpairs": 0, 00:17:45.226 "completed_nvme_io": 0, 00:17:45.226 "current_admin_qpairs": 0, 00:17:45.226 "current_io_qpairs": 0, 00:17:45.226 "io_qpairs": 0, 00:17:45.226 "name": "nvmf_tgt_poll_group_001", 00:17:45.226 "pending_bdev_io": 0, 00:17:45.226 "transports": [ 00:17:45.226 { 00:17:45.226 "trtype": "TCP" 00:17:45.226 } 00:17:45.226 ] 00:17:45.226 }, 00:17:45.226 { 00:17:45.226 "admin_qpairs": 0, 00:17:45.226 "completed_nvme_io": 0, 00:17:45.226 "current_admin_qpairs": 0, 00:17:45.226 "current_io_qpairs": 0, 00:17:45.226 "io_qpairs": 0, 00:17:45.226 "name": "nvmf_tgt_poll_group_002", 00:17:45.226 "pending_bdev_io": 0, 00:17:45.226 "transports": [ 00:17:45.226 { 00:17:45.226 "trtype": "TCP" 00:17:45.226 } 00:17:45.226 ] 00:17:45.226 }, 00:17:45.226 { 00:17:45.226 "admin_qpairs": 0, 00:17:45.226 "completed_nvme_io": 0, 00:17:45.226 "current_admin_qpairs": 0, 00:17:45.226 "current_io_qpairs": 0, 00:17:45.226 "io_qpairs": 0, 00:17:45.226 "name": "nvmf_tgt_poll_group_003", 00:17:45.226 "pending_bdev_io": 0, 00:17:45.226 "transports": [ 00:17:45.226 { 00:17:45.226 "trtype": "TCP" 00:17:45.226 } 00:17:45.226 ] 00:17:45.226 } 00:17:45.226 ], 00:17:45.226 "tick_rate": 2290000000 00:17:45.226 }' 00:17:45.226 09:58:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:17:45.226 09:58:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:17:45.226 09:58:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:17:45.226 09:58:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:17:45.226 09:58:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:17:45.226 09:58:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:17:45.226 09:58:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 
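The jq/awk pipelines around rpc.sh@14-20 above are the two helpers the test uses to assert on nvmf_get_stats output: jcount counts how many values a jq filter yields, jsum adds them up. A minimal standalone sketch of the same idea, assuming $stats holds the JSON captured above (the real helpers live in target/rpc.sh):

  jcount() { local filter=$1; jq "$filter" <<< "$stats" | wc -l; }
  jsum()   { local filter=$1; jq "$filter" <<< "$stats" | awk '{s+=$1} END {print s}'; }

  (( $(jcount '.poll_groups[].name') == 4 ))      # one poll group per core in -m 0xF
  (( $(jsum '.poll_groups[].io_qpairs') == 0 ))   # no I/O qpairs before any initiator connects

The same jsum checks reappear at the end of the run, where they are expected to be non-zero once the connect/disconnect cycles have completed.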
00:17:45.226 09:58:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:17:45.226 09:58:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:17:45.226 09:58:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:17:45.226 09:58:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:17:45.226 09:58:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:17:45.226 09:58:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:17:45.226 09:58:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:17:45.226 09:58:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:45.226 09:58:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:45.226 Malloc1 00:17:45.226 09:58:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:45.226 09:58:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:17:45.226 09:58:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:45.227 09:58:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:45.227 09:58:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:45.227 09:58:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:45.227 09:58:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:45.227 09:58:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:45.227 09:58:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:45.227 09:58:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:17:45.227 09:58:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:45.227 09:58:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:45.227 09:58:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:45.227 09:58:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:45.227 09:58:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:45.227 09:58:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:45.227 [2024-07-15 09:58:58.802200] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:45.227 09:58:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:45.227 09:58:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec --hostid=a2b6b25a-cc90-4aea-9f09-c06f8a634aec -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec -a 10.0.0.2 -s 4420 00:17:45.227 09:58:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:17:45.227 09:58:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec --hostid=a2b6b25a-cc90-4aea-9f09-c06f8a634aec -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec -a 10.0.0.2 -s 4420 00:17:45.227 09:58:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 
-- # local arg=nvme 00:17:45.486 09:58:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:45.486 09:58:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:17:45.486 09:58:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:45.486 09:58:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:17:45.486 09:58:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:45.486 09:58:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:17:45.486 09:58:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:17:45.486 09:58:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec --hostid=a2b6b25a-cc90-4aea-9f09-c06f8a634aec -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec -a 10.0.0.2 -s 4420 00:17:45.486 [2024-07-15 09:58:58.828442] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec' 00:17:45.486 Failed to write to /dev/nvme-fabrics: Input/output error 00:17:45.486 could not add new controller: failed to write to nvme-fabrics device 00:17:45.486 09:58:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # es=1 00:17:45.486 09:58:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:45.486 09:58:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:45.486 09:58:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:45.486 09:58:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec 00:17:45.486 09:58:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:45.486 09:58:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:45.486 09:58:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:45.486 09:58:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec --hostid=a2b6b25a-cc90-4aea-9f09-c06f8a634aec -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:45.486 09:58:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:17:45.486 09:58:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:17:45.486 09:58:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:17:45.486 09:58:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:17:45.486 09:58:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:17:48.021 09:59:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:17:48.021 09:59:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:17:48.021 09:59:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:17:48.021 09:59:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:17:48.021 09:59:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:17:48.021 09:59:01 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:17:48.021 09:59:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:48.021 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:48.021 09:59:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:48.021 09:59:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:17:48.021 09:59:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:17:48.021 09:59:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:48.021 09:59:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:17:48.021 09:59:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:48.021 09:59:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:17:48.021 09:59:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec 00:17:48.021 09:59:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:48.021 09:59:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:48.021 09:59:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:48.021 09:59:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec --hostid=a2b6b25a-cc90-4aea-9f09-c06f8a634aec -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:48.021 09:59:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:17:48.021 09:59:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec --hostid=a2b6b25a-cc90-4aea-9f09-c06f8a634aec -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:48.021 09:59:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 -- # local arg=nvme 00:17:48.021 09:59:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:48.021 09:59:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:17:48.022 09:59:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:48.022 09:59:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:17:48.022 09:59:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:48.022 09:59:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:17:48.022 09:59:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:17:48.022 09:59:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec --hostid=a2b6b25a-cc90-4aea-9f09-c06f8a634aec -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:48.022 [2024-07-15 09:59:01.246741] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec' 00:17:48.022 Failed to write to /dev/nvme-fabrics: Input/output error 00:17:48.022 could not add new controller: failed to write to nvme-fabrics device 00:17:48.022 09:59:01 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@651 -- # es=1 00:17:48.022 09:59:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:48.022 09:59:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:48.022 09:59:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:48.022 09:59:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:17:48.022 09:59:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:48.022 09:59:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:48.022 09:59:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:48.022 09:59:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec --hostid=a2b6b25a-cc90-4aea-9f09-c06f8a634aec -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:48.022 09:59:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:17:48.022 09:59:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:17:48.022 09:59:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:17:48.022 09:59:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:17:48.022 09:59:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:17:49.936 09:59:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:17:49.936 09:59:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:17:49.936 09:59:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:17:49.936 09:59:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:17:49.936 09:59:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:17:49.936 09:59:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:17:49.936 09:59:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:49.936 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:49.936 09:59:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:49.936 09:59:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:17:49.936 09:59:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:17:49.936 09:59:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:49.936 09:59:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:17:49.936 09:59:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:50.194 09:59:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:17:50.194 09:59:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:50.194 09:59:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:50.194 09:59:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:50.194 09:59:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:50.194 09:59:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:17:50.194 09:59:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:17:50.194 09:59:03 
nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:50.194 09:59:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:50.194 09:59:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:50.194 09:59:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:50.194 09:59:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:50.194 09:59:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:50.194 09:59:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:50.194 [2024-07-15 09:59:03.569173] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:50.194 09:59:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:50.194 09:59:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:17:50.194 09:59:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:50.194 09:59:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:50.194 09:59:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:50.194 09:59:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:50.194 09:59:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:50.194 09:59:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:50.194 09:59:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:50.194 09:59:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec --hostid=a2b6b25a-cc90-4aea-9f09-c06f8a634aec -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:50.194 09:59:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:17:50.194 09:59:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:17:50.194 09:59:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:17:50.194 09:59:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:17:50.194 09:59:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:17:52.737 09:59:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:17:52.737 09:59:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:17:52.737 09:59:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:17:52.737 09:59:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:17:52.737 09:59:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:17:52.737 09:59:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:17:52.737 09:59:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:52.737 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:52.737 09:59:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:52.737 09:59:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:17:52.737 09:59:05 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:17:52.737 09:59:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:52.737 09:59:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:17:52.737 09:59:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:52.737 09:59:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:17:52.737 09:59:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:17:52.737 09:59:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:52.737 09:59:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:52.737 09:59:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:52.737 09:59:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:52.737 09:59:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:52.737 09:59:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:52.737 09:59:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:52.737 09:59:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:17:52.737 09:59:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:52.737 09:59:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:52.737 09:59:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:52.737 09:59:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:52.737 09:59:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:52.737 09:59:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:52.737 09:59:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:52.737 [2024-07-15 09:59:05.908142] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:52.737 09:59:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:52.737 09:59:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:17:52.737 09:59:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:52.737 09:59:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:52.737 09:59:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:52.737 09:59:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:52.737 09:59:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:52.737 09:59:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:52.737 09:59:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:52.737 09:59:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec --hostid=a2b6b25a-cc90-4aea-9f09-c06f8a634aec -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:52.737 09:59:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:17:52.737 09:59:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 
-- # local i=0 00:17:52.737 09:59:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:17:52.737 09:59:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:17:52.737 09:59:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:17:54.638 09:59:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:17:54.638 09:59:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:17:54.638 09:59:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:17:54.638 09:59:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:17:54.638 09:59:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:17:54.638 09:59:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:17:54.638 09:59:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:54.907 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:54.907 09:59:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:54.907 09:59:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:17:54.907 09:59:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:17:54.907 09:59:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:54.907 09:59:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:17:54.907 09:59:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:54.907 09:59:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:17:54.907 09:59:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:17:54.907 09:59:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:54.907 09:59:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:54.907 09:59:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:54.907 09:59:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:54.907 09:59:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:54.907 09:59:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:54.907 09:59:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:54.907 09:59:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:17:54.907 09:59:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:54.907 09:59:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:54.907 09:59:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:54.907 09:59:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:54.907 09:59:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:54.907 09:59:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:54.907 09:59:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:54.907 [2024-07-15 09:59:08.343257] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:17:54.907 09:59:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:54.907 09:59:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:17:54.907 09:59:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:54.907 09:59:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:54.907 09:59:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:54.907 09:59:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:54.907 09:59:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:54.907 09:59:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:54.907 09:59:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:54.907 09:59:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec --hostid=a2b6b25a-cc90-4aea-9f09-c06f8a634aec -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:55.180 09:59:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:17:55.180 09:59:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:17:55.180 09:59:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:17:55.180 09:59:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:17:55.180 09:59:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:17:57.082 09:59:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:17:57.082 09:59:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:17:57.082 09:59:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:17:57.082 09:59:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:17:57.082 09:59:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:17:57.082 09:59:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:17:57.082 09:59:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:57.082 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:57.082 09:59:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:57.082 09:59:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:17:57.082 09:59:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:17:57.082 09:59:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:57.082 09:59:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:17:57.082 09:59:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:57.082 09:59:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:17:57.082 09:59:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:17:57.082 09:59:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:57.082 09:59:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:57.082 09:59:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 
0 == 0 ]] 00:17:57.082 09:59:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:57.082 09:59:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:57.082 09:59:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:57.082 09:59:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:57.082 09:59:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:17:57.082 09:59:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:57.082 09:59:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:57.082 09:59:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:57.340 09:59:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:57.340 09:59:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:57.340 09:59:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:57.340 09:59:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:57.340 [2024-07-15 09:59:10.674332] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:57.340 09:59:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:57.340 09:59:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:17:57.340 09:59:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:57.340 09:59:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:57.340 09:59:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:57.340 09:59:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:57.340 09:59:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:57.340 09:59:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:57.340 09:59:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:57.340 09:59:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec --hostid=a2b6b25a-cc90-4aea-9f09-c06f8a634aec -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:57.340 09:59:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:17:57.340 09:59:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:17:57.340 09:59:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:17:57.340 09:59:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:17:57.340 09:59:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:17:59.871 09:59:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:17:59.872 09:59:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:17:59.872 09:59:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:17:59.872 09:59:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:17:59.872 09:59:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:17:59.872 
09:59:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:17:59.872 09:59:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:59.872 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:59.872 09:59:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:59.872 09:59:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:17:59.872 09:59:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:17:59.872 09:59:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:59.872 09:59:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:17:59.872 09:59:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:59.872 09:59:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:17:59.872 09:59:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:17:59.872 09:59:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:59.872 09:59:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:59.872 09:59:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:59.872 09:59:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:59.872 09:59:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:59.872 09:59:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:59.872 09:59:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:59.872 09:59:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:17:59.872 09:59:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:59.872 09:59:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:59.872 09:59:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:59.872 09:59:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:59.872 09:59:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:59.872 09:59:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:59.872 09:59:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:59.872 [2024-07-15 09:59:12.985443] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:59.872 09:59:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:59.872 09:59:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:17:59.872 09:59:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:59.872 09:59:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:59.872 09:59:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:59.872 09:59:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:59.872 09:59:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:59.872 09:59:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:59.872 09:59:13 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:59.872 09:59:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec --hostid=a2b6b25a-cc90-4aea-9f09-c06f8a634aec -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:59.872 09:59:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:17:59.872 09:59:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:17:59.872 09:59:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:17:59.872 09:59:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:17:59.872 09:59:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:18:01.774 09:59:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:18:01.774 09:59:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:18:01.774 09:59:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:18:01.774 09:59:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:18:01.774 09:59:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:18:01.774 09:59:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:18:01.774 09:59:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:01.774 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:01.774 09:59:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:18:01.774 09:59:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:18:01.774 09:59:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:18:01.774 09:59:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:01.774 09:59:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:18:01.774 09:59:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:01.774 09:59:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:18:01.774 09:59:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:18:01.774 09:59:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:01.774 09:59:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:01.774 09:59:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:01.774 09:59:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:01.774 09:59:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:01.774 09:59:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:01.774 09:59:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:01.774 09:59:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:18:01.774 09:59:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:18:01.774 09:59:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:18:01.774 09:59:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:01.774 09:59:15 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:18:01.774 09:59:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:01.774 09:59:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:01.774 09:59:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:01.774 09:59:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:01.774 [2024-07-15 09:59:15.332378] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:01.774 09:59:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:01.774 09:59:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:01.774 09:59:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:01.774 09:59:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:01.774 09:59:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:01.774 09:59:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:18:01.774 09:59:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:01.774 09:59:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:02.052 09:59:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:02.052 09:59:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:02.052 09:59:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:02.052 09:59:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:02.052 09:59:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:02.052 09:59:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:02.052 09:59:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:02.052 09:59:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:02.052 09:59:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:02.052 09:59:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:18:02.052 09:59:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:18:02.052 09:59:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:02.052 09:59:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:02.052 09:59:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:02.052 09:59:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:02.052 09:59:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:02.052 09:59:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:02.052 [2024-07-15 09:59:15.404292] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:02.052 09:59:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:02.052 09:59:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:02.052 09:59:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:18:02.052 09:59:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:02.052 09:59:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:02.052 09:59:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:18:02.052 09:59:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:02.052 09:59:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:02.052 09:59:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:02.052 09:59:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:02.052 09:59:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:02.052 09:59:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:02.052 09:59:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:02.052 09:59:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:02.052 09:59:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:02.052 09:59:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:02.052 09:59:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:02.052 09:59:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:18:02.052 09:59:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:18:02.052 09:59:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:02.052 09:59:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:02.052 09:59:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:02.052 09:59:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:02.052 09:59:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:02.052 09:59:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:02.052 [2024-07-15 09:59:15.476218] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:02.052 09:59:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:02.052 09:59:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:02.052 09:59:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:02.052 09:59:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:02.052 09:59:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:02.052 09:59:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:18:02.052 09:59:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:02.052 09:59:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:02.052 09:59:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:02.052 09:59:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:02.052 09:59:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:02.052 09:59:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 
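The repeated rpc.sh@99-107 blocks above and below are a create/delete churn loop: each pass builds a subsystem, exposes it on the TCP listener, attaches and detaches the Malloc1 namespace, and tears the subsystem back down, without any host connecting. rpc_cmd in the trace ultimately forwards these calls to SPDK's scripts/rpc.py against /var/tmp/spdk.sock, so one pass is roughly the following (paths and NQNs as logged; the wrapper-free form is an approximation):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  $rpc nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
  $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1      # nsid 1
  $rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1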
00:18:02.052 09:59:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:02.052 09:59:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:02.052 09:59:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:02.052 09:59:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:02.052 09:59:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:02.052 09:59:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:18:02.052 09:59:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:18:02.052 09:59:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:02.052 09:59:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:02.052 09:59:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:02.052 09:59:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:02.052 09:59:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:02.052 09:59:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:02.052 [2024-07-15 09:59:15.548208] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:02.052 09:59:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:02.052 09:59:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:02.052 09:59:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:02.052 09:59:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:02.052 09:59:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:02.052 09:59:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:18:02.052 09:59:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:02.052 09:59:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:02.052 09:59:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:02.052 09:59:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:02.052 09:59:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:02.052 09:59:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:02.052 09:59:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:02.052 09:59:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:02.052 09:59:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:02.052 09:59:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:02.052 09:59:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:02.052 09:59:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:18:02.052 09:59:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:18:02.052 09:59:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:02.052 09:59:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 
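Every connect in the earlier iterations is gated by waitforserial, and every disconnect by waitforserial_disconnect (common/autotest_common.sh@1198-1231 in the trace); both simply poll lsblk for the subsystem serial. A reduced sketch of the connect side, using the serial from this run and the same 15 x 2 s polling budget (a hypothetical simplification; the in-tree helper also tracks an expected device count):

  waitforserial() {
      local serial=$1 i=0
      while (( i++ <= 15 )); do
          # a namespace from the new controller appears with SERIAL matching the subsystem
          if (( $(lsblk -l -o NAME,SERIAL | grep -c "$serial") >= 1 )); then
              return 0
          fi
          sleep 2
      done
      return 1
  }

  nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
      --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec
  waitforserial SPDKISFASTANDAWESOME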
00:18:02.052 09:59:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:02.052 09:59:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:02.052 09:59:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:02.052 09:59:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:02.052 [2024-07-15 09:59:15.620180] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:02.052 09:59:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:02.052 09:59:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:02.052 09:59:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:02.052 09:59:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:02.311 09:59:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:02.311 09:59:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:18:02.311 09:59:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:02.311 09:59:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:02.311 09:59:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:02.311 09:59:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:02.311 09:59:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:02.311 09:59:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:02.311 09:59:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:02.311 09:59:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:02.311 09:59:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:02.311 09:59:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:02.311 09:59:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:02.311 09:59:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:18:02.311 09:59:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:02.311 09:59:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:02.311 09:59:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:02.311 09:59:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:18:02.311 "poll_groups": [ 00:18:02.311 { 00:18:02.311 "admin_qpairs": 2, 00:18:02.311 "completed_nvme_io": 66, 00:18:02.311 "current_admin_qpairs": 0, 00:18:02.311 "current_io_qpairs": 0, 00:18:02.311 "io_qpairs": 16, 00:18:02.311 "name": "nvmf_tgt_poll_group_000", 00:18:02.311 "pending_bdev_io": 0, 00:18:02.311 "transports": [ 00:18:02.311 { 00:18:02.311 "trtype": "TCP" 00:18:02.311 } 00:18:02.311 ] 00:18:02.311 }, 00:18:02.311 { 00:18:02.311 "admin_qpairs": 3, 00:18:02.311 "completed_nvme_io": 117, 00:18:02.311 "current_admin_qpairs": 0, 00:18:02.311 "current_io_qpairs": 0, 00:18:02.311 "io_qpairs": 17, 00:18:02.311 "name": "nvmf_tgt_poll_group_001", 00:18:02.311 "pending_bdev_io": 0, 00:18:02.311 "transports": [ 00:18:02.311 { 00:18:02.311 "trtype": "TCP" 00:18:02.311 } 00:18:02.311 ] 00:18:02.311 }, 00:18:02.311 { 00:18:02.311 "admin_qpairs": 1, 00:18:02.311 
"completed_nvme_io": 167, 00:18:02.311 "current_admin_qpairs": 0, 00:18:02.311 "current_io_qpairs": 0, 00:18:02.311 "io_qpairs": 19, 00:18:02.311 "name": "nvmf_tgt_poll_group_002", 00:18:02.311 "pending_bdev_io": 0, 00:18:02.311 "transports": [ 00:18:02.311 { 00:18:02.311 "trtype": "TCP" 00:18:02.311 } 00:18:02.311 ] 00:18:02.311 }, 00:18:02.311 { 00:18:02.311 "admin_qpairs": 1, 00:18:02.311 "completed_nvme_io": 70, 00:18:02.311 "current_admin_qpairs": 0, 00:18:02.311 "current_io_qpairs": 0, 00:18:02.311 "io_qpairs": 18, 00:18:02.311 "name": "nvmf_tgt_poll_group_003", 00:18:02.311 "pending_bdev_io": 0, 00:18:02.311 "transports": [ 00:18:02.311 { 00:18:02.311 "trtype": "TCP" 00:18:02.311 } 00:18:02.311 ] 00:18:02.311 } 00:18:02.311 ], 00:18:02.311 "tick_rate": 2290000000 00:18:02.311 }' 00:18:02.311 09:59:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:18:02.311 09:59:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:18:02.311 09:59:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:18:02.311 09:59:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:18:02.311 09:59:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:18:02.311 09:59:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:18:02.311 09:59:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:18:02.311 09:59:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:18:02.311 09:59:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:18:02.311 09:59:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # (( 70 > 0 )) 00:18:02.311 09:59:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:18:02.311 09:59:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:18:02.311 09:59:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:18:02.311 09:59:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:02.312 09:59:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@117 -- # sync 00:18:02.312 09:59:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:02.312 09:59:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@120 -- # set +e 00:18:02.312 09:59:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:02.312 09:59:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:02.312 rmmod nvme_tcp 00:18:02.312 rmmod nvme_fabrics 00:18:02.312 rmmod nvme_keyring 00:18:02.312 09:59:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:02.312 09:59:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@124 -- # set -e 00:18:02.312 09:59:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@125 -- # return 0 00:18:02.312 09:59:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@489 -- # '[' -n 67385 ']' 00:18:02.312 09:59:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@490 -- # killprocess 67385 00:18:02.312 09:59:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@948 -- # '[' -z 67385 ']' 00:18:02.312 09:59:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@952 -- # kill -0 67385 00:18:02.312 09:59:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@953 -- # uname 00:18:02.312 09:59:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:02.312 09:59:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 67385 00:18:02.571 killing process with pid 67385 00:18:02.571 09:59:15 nvmf_tcp.nvmf_rpc 
-- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:02.571 09:59:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:02.571 09:59:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 67385' 00:18:02.571 09:59:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@967 -- # kill 67385 00:18:02.571 09:59:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@972 -- # wait 67385 00:18:02.571 09:59:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:02.571 09:59:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:02.571 09:59:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:02.571 09:59:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:02.571 09:59:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:02.571 09:59:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:02.571 09:59:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:02.571 09:59:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:02.830 09:59:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:18:02.830 00:18:02.830 real 0m19.124s 00:18:02.830 user 1m12.938s 00:18:02.830 sys 0m1.857s 00:18:02.830 ************************************ 00:18:02.830 END TEST nvmf_rpc 00:18:02.830 ************************************ 00:18:02.830 09:59:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:02.830 09:59:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:02.830 09:59:16 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:18:02.830 09:59:16 nvmf_tcp -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /home/vagrant/spdk_repo/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:18:02.830 09:59:16 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:18:02.830 09:59:16 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:02.830 09:59:16 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:02.830 ************************************ 00:18:02.830 START TEST nvmf_invalid 00:18:02.830 ************************************ 00:18:02.830 09:59:16 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:18:02.830 * Looking for test storage... 
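Stepping back to the nvmf_rpc run that just wrapped up: after the last loop pass it captured nvmf_get_stats and reduced the per-poll-group counters with the jsum helper, i.e. a jq projection summed by awk. A hedged reconstruction of that helper from the traced commands (the real function is defined in target/rpc.sh):

jsum() {
    local filter=$1
    # pull one numeric field per poll group out of the captured stats, then add them up
    jq "$filter" <<< "$stats" | awk '{s+=$1} END {print s}'
}

stats=$(rpc_cmd nvmf_get_stats)
(( $(jsum '.poll_groups[].admin_qpairs') > 0 ))   # 2 + 3 + 1 + 1 = 7 in this run
(( $(jsum '.poll_groups[].io_qpairs') > 0 ))      # 16 + 17 + 19 + 18 = 70 in this run

The rdma-only branch is then skipped ('[' rdma == tcp ']'), and nvmftestfini unloads the nvme-tcp and nvme-fabrics modules, kills the target process (pid 67385 here) and flushes the addresses on the test interface.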
00:18:02.830 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:18:02.830 09:59:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:02.830 09:59:16 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:18:02.830 09:59:16 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:02.830 09:59:16 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:02.830 09:59:16 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:02.830 09:59:16 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:02.830 09:59:16 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:02.830 09:59:16 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:02.830 09:59:16 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:02.830 09:59:16 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:02.830 09:59:16 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:02.830 09:59:16 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:02.830 09:59:16 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec 00:18:02.830 09:59:16 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=a2b6b25a-cc90-4aea-9f09-c06f8a634aec 00:18:02.830 09:59:16 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:02.830 09:59:16 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:02.830 09:59:16 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:02.830 09:59:16 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:02.830 09:59:16 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:02.830 09:59:16 nvmf_tcp.nvmf_invalid -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:02.830 09:59:16 nvmf_tcp.nvmf_invalid -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:02.830 09:59:16 nvmf_tcp.nvmf_invalid -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:02.830 09:59:16 nvmf_tcp.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:02.830 09:59:16 nvmf_tcp.nvmf_invalid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:02.831 
09:59:16 nvmf_tcp.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:02.831 09:59:16 nvmf_tcp.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:18:02.831 09:59:16 nvmf_tcp.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:02.831 09:59:16 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@47 -- # : 0 00:18:02.831 09:59:16 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:02.831 09:59:16 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:02.831 09:59:16 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:02.831 09:59:16 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:02.831 09:59:16 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:02.831 09:59:16 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:02.831 09:59:16 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:02.831 09:59:16 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:02.831 09:59:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py 00:18:02.831 09:59:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:02.831 09:59:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:18:02.831 09:59:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:18:02.831 09:59:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:18:02.831 09:59:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:18:02.831 09:59:16 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:02.831 09:59:16 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:02.831 09:59:16 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:02.831 09:59:16 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:02.831 09:59:16 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:02.831 09:59:16 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:02.831 09:59:16 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:02.831 09:59:16 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:02.831 09:59:16 
nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:18:02.831 09:59:16 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:18:02.831 09:59:16 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:18:02.831 09:59:16 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:18:02.831 09:59:16 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:18:02.831 09:59:16 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@432 -- # nvmf_veth_init 00:18:02.831 09:59:16 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:02.831 09:59:16 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:02.831 09:59:16 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:18:02.831 09:59:16 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:18:02.831 09:59:16 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:02.831 09:59:16 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:02.831 09:59:16 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:02.831 09:59:16 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:02.831 09:59:16 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:02.831 09:59:16 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:02.831 09:59:16 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:02.831 09:59:16 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:02.831 09:59:16 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:18:03.091 09:59:16 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:18:03.091 Cannot find device "nvmf_tgt_br" 00:18:03.091 09:59:16 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@155 -- # true 00:18:03.091 09:59:16 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:18:03.091 Cannot find device "nvmf_tgt_br2" 00:18:03.091 09:59:16 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@156 -- # true 00:18:03.091 09:59:16 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:18:03.091 09:59:16 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:18:03.091 Cannot find device "nvmf_tgt_br" 00:18:03.091 09:59:16 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@158 -- # true 00:18:03.091 09:59:16 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:18:03.091 Cannot find device "nvmf_tgt_br2" 00:18:03.091 09:59:16 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@159 -- # true 00:18:03.091 09:59:16 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:18:03.091 09:59:16 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:18:03.091 09:59:16 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:03.091 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:03.091 09:59:16 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@162 -- # true 00:18:03.091 09:59:16 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:03.091 Cannot open network 
namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:03.091 09:59:16 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@163 -- # true 00:18:03.091 09:59:16 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:18:03.091 09:59:16 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:03.091 09:59:16 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:03.091 09:59:16 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:03.091 09:59:16 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:03.091 09:59:16 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:03.091 09:59:16 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:03.091 09:59:16 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:18:03.091 09:59:16 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:18:03.091 09:59:16 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:18:03.091 09:59:16 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:18:03.091 09:59:16 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:18:03.091 09:59:16 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:18:03.091 09:59:16 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:03.091 09:59:16 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:03.352 09:59:16 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:03.352 09:59:16 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:18:03.352 09:59:16 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:18:03.352 09:59:16 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:18:03.352 09:59:16 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:03.352 09:59:16 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:03.352 09:59:16 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:03.352 09:59:16 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:03.352 09:59:16 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:18:03.352 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:03.352 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.084 ms 00:18:03.352 00:18:03.352 --- 10.0.0.2 ping statistics --- 00:18:03.352 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:03.352 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 00:18:03.352 09:59:16 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:18:03.352 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:18:03.352 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.110 ms 00:18:03.352 00:18:03.352 --- 10.0.0.3 ping statistics --- 00:18:03.352 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:03.352 rtt min/avg/max/mdev = 0.110/0.110/0.110/0.000 ms 00:18:03.352 09:59:16 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:03.352 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:03.352 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:18:03.352 00:18:03.352 --- 10.0.0.1 ping statistics --- 00:18:03.352 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:03.352 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:18:03.352 09:59:16 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:03.352 09:59:16 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@433 -- # return 0 00:18:03.352 09:59:16 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:03.352 09:59:16 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:03.352 09:59:16 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:03.352 09:59:16 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:03.352 09:59:16 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:03.352 09:59:16 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:03.352 09:59:16 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:03.352 09:59:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:18:03.352 09:59:16 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:03.352 09:59:16 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:03.352 09:59:16 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:18:03.352 09:59:16 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@481 -- # nvmfpid=67904 00:18:03.352 09:59:16 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:03.352 09:59:16 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@482 -- # waitforlisten 67904 00:18:03.352 09:59:16 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@829 -- # '[' -z 67904 ']' 00:18:03.352 09:59:16 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:03.352 09:59:16 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:03.352 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:03.352 09:59:16 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:03.352 09:59:16 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:03.352 09:59:16 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:18:03.352 [2024-07-15 09:59:16.856503] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
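The nvmf_veth_init block above, including the "Cannot find device" notices that just come from the tolerant teardown of any leftover interfaces before bring-up, builds the virtual topology the rest of the test talks over: a network namespace for the target, veth pairs whose host-side ends are bridged together, an iptables accept rule for the NVMe/TCP port, and a ping in each direction. A condensed sketch with the interface and namespace names from the trace (the second nvmf_tgt_if2/nvmf_tgt_br2 pair for 10.0.0.3 is set up the same way and omitted here):

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

# initiator address on the host, target address inside the namespace
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if

ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

# bridge the host-side veth ends and let NVMe/TCP traffic through port 4420
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT

ping -c 1 10.0.0.2                                    # host -> target namespace
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1     # target namespace -> host

With the topology up, nvmfappstart runs nvmf_tgt inside nvmf_tgt_ns_spdk (the NVMF_TARGET_NS_CMD prefix), which is why the target listens on 10.0.0.2 while the initiator-side tooling stays on the host at 10.0.0.1.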
00:18:03.352 [2024-07-15 09:59:16.856576] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:03.611 [2024-07-15 09:59:16.994481] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:03.611 [2024-07-15 09:59:17.095741] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:03.611 [2024-07-15 09:59:17.095787] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:03.611 [2024-07-15 09:59:17.095793] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:03.611 [2024-07-15 09:59:17.095797] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:03.611 [2024-07-15 09:59:17.095801] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:03.611 [2024-07-15 09:59:17.096023] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:03.611 [2024-07-15 09:59:17.096212] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:03.611 [2024-07-15 09:59:17.096422] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:03.611 [2024-07-15 09:59:17.096425] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:18:04.180 09:59:17 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:04.180 09:59:17 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@862 -- # return 0 00:18:04.180 09:59:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:04.180 09:59:17 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:04.180 09:59:17 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:18:04.439 09:59:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:04.439 09:59:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:18:04.439 09:59:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode12103 00:18:04.439 [2024-07-15 09:59:17.980916] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:18:04.439 09:59:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # out='2024/07/15 09:59:17 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode12103 tgt_name:foobar], err: error received for nvmf_create_subsystem method, err: Code=-32603 Msg=Unable to find target foobar 00:18:04.439 request: 00:18:04.439 { 00:18:04.439 "method": "nvmf_create_subsystem", 00:18:04.439 "params": { 00:18:04.439 "nqn": "nqn.2016-06.io.spdk:cnode12103", 00:18:04.439 "tgt_name": "foobar" 00:18:04.439 } 00:18:04.439 } 00:18:04.439 Got JSON-RPC error response 00:18:04.439 GoRPCClient: error on JSON-RPC call' 00:18:04.439 09:59:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@41 -- # [[ 2024/07/15 09:59:17 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode12103 tgt_name:foobar], err: error received for nvmf_create_subsystem method, err: Code=-32603 Msg=Unable to find target foobar 00:18:04.439 
request: 00:18:04.439 { 00:18:04.439 "method": "nvmf_create_subsystem", 00:18:04.439 "params": { 00:18:04.439 "nqn": "nqn.2016-06.io.spdk:cnode12103", 00:18:04.439 "tgt_name": "foobar" 00:18:04.439 } 00:18:04.439 } 00:18:04.439 Got JSON-RPC error response 00:18:04.439 GoRPCClient: error on JSON-RPC call == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:18:04.439 09:59:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:18:04.439 09:59:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode31371 00:18:04.697 [2024-07-15 09:59:18.184746] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode31371: invalid serial number 'SPDKISFASTANDAWESOME' 00:18:04.697 09:59:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # out='2024/07/15 09:59:18 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode31371 serial_number:SPDKISFASTANDAWESOME], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN SPDKISFASTANDAWESOME 00:18:04.697 request: 00:18:04.697 { 00:18:04.697 "method": "nvmf_create_subsystem", 00:18:04.697 "params": { 00:18:04.697 "nqn": "nqn.2016-06.io.spdk:cnode31371", 00:18:04.697 "serial_number": "SPDKISFASTANDAWESOME\u001f" 00:18:04.697 } 00:18:04.697 } 00:18:04.697 Got JSON-RPC error response 00:18:04.697 GoRPCClient: error on JSON-RPC call' 00:18:04.697 09:59:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@46 -- # [[ 2024/07/15 09:59:18 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode31371 serial_number:SPDKISFASTANDAWESOME], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN SPDKISFASTANDAWESOME 00:18:04.697 request: 00:18:04.697 { 00:18:04.697 "method": "nvmf_create_subsystem", 00:18:04.697 "params": { 00:18:04.697 "nqn": "nqn.2016-06.io.spdk:cnode31371", 00:18:04.697 "serial_number": "SPDKISFASTANDAWESOME\u001f" 00:18:04.697 } 00:18:04.697 } 00:18:04.697 Got JSON-RPC error response 00:18:04.697 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \S\N* ]] 00:18:04.697 09:59:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:18:04.697 09:59:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode652 00:18:04.957 [2024-07-15 09:59:18.392504] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode652: invalid model number 'SPDK_Controller' 00:18:04.957 09:59:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # out='2024/07/15 09:59:18 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:SPDK_Controller nqn:nqn.2016-06.io.spdk:cnode652], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN SPDK_Controller 00:18:04.957 request: 00:18:04.957 { 00:18:04.957 "method": "nvmf_create_subsystem", 00:18:04.957 "params": { 00:18:04.957 "nqn": "nqn.2016-06.io.spdk:cnode652", 00:18:04.957 "model_number": "SPDK_Controller\u001f" 00:18:04.957 } 00:18:04.957 } 00:18:04.957 Got JSON-RPC error response 00:18:04.957 GoRPCClient: error on JSON-RPC call' 00:18:04.957 09:59:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@51 -- # [[ 2024/07/15 09:59:18 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:SPDK_Controller 
nqn:nqn.2016-06.io.spdk:cnode652], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN SPDK_Controller 00:18:04.957 request: 00:18:04.957 { 00:18:04.957 "method": "nvmf_create_subsystem", 00:18:04.957 "params": { 00:18:04.957 "nqn": "nqn.2016-06.io.spdk:cnode652", 00:18:04.957 "model_number": "SPDK_Controller\u001f" 00:18:04.957 } 00:18:04.957 } 00:18:04.957 Got JSON-RPC error response 00:18:04.957 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \M\N* ]] 00:18:04.957 09:59:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:18:04.957 09:59:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:18:04.957 09:59:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:18:04.957 09:59:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:18:04.957 09:59:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:18:04.957 09:59:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:18:04.957 09:59:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:04.957 09:59:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:18:04.957 09:59:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:18:04.957 09:59:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:18:04.957 09:59:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:04.957 09:59:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:04.957 09:59:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44 00:18:04.957 09:59:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 00:18:04.957 09:59:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:18:04.957 09:59:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:04.957 09:59:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:04.957 09:59:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:18:04.957 09:59:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 00:18:04.957 09:59:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:18:04.957 09:59:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:04.957 09:59:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:04.957 09:59:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:18:04.957 09:59:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:18:04.957 09:59:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:18:04.957 09:59:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:04.957 09:59:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:04.957 09:59:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:18:04.957 09:59:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:18:04.957 09:59:18 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:18:04.957 09:59:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:04.957 09:59:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:04.957 09:59:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 42 00:18:04.957 09:59:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2a' 00:18:04.957 09:59:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='*' 00:18:04.957 09:59:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:04.957 09:59:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:04.957 09:59:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:18:04.957 09:59:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:18:04.957 09:59:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:18:04.957 09:59:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:04.957 09:59:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:04.957 09:59:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:18:04.957 09:59:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:18:04.958 09:59:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:18:04.958 09:59:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:04.958 09:59:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:04.958 09:59:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:18:04.958 09:59:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 00:18:04.958 09:59:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:18:04.958 09:59:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:04.958 09:59:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:04.958 09:59:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 102 00:18:04.958 09:59:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x66' 00:18:04.958 09:59:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=f 00:18:04.958 09:59:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:04.958 09:59:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:04.958 09:59:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:18:04.958 09:59:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:18:04.958 09:59:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:18:04.958 09:59:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:04.958 09:59:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:04.958 09:59:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:18:04.958 09:59:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:18:04.958 09:59:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:18:04.958 09:59:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:04.958 09:59:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:04.958 09:59:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 00:18:04.958 09:59:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 00:18:04.958 09:59:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:18:04.958 09:59:18 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:04.958 09:59:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:04.958 09:59:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:18:04.958 09:59:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:18:04.958 09:59:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:18:04.958 09:59:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:04.958 09:59:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:04.958 09:59:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:18:04.958 09:59:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:18:04.958 09:59:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:18:04.958 09:59:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:04.958 09:59:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:04.958 09:59:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:18:04.958 09:59:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:18:04.958 09:59:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:18:04.958 09:59:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:04.958 09:59:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:04.958 09:59:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 83 00:18:04.958 09:59:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x53' 00:18:04.958 09:59:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=S 00:18:04.958 09:59:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:04.958 09:59:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:04.958 09:59:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44 00:18:04.958 09:59:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 00:18:04.958 09:59:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:18:04.958 09:59:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:04.958 09:59:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:05.281 09:59:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:18:05.281 09:59:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:18:05.281 09:59:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:18:05.281 09:59:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:05.281 09:59:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:05.281 09:59:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44 00:18:05.281 09:59:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 00:18:05.281 09:59:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:18:05.281 09:59:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:05.281 09:59:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:05.281 09:59:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:18:05.281 09:59:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:18:05.281 09:59:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:18:05.281 09:59:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:05.281 09:59:18 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:05.281 09:59:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ m == \- ]] 00:18:05.281 09:59:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo 'm,A+v*QxFf+n;"On;"On;"On;"On;"On;\"O\u007fn;"On;"On;\"O\u007f /dev/null' 00:18:08.152 09:59:21 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:08.152 09:59:21 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:18:08.152 00:18:08.152 real 0m5.403s 00:18:08.152 user 0m20.815s 00:18:08.152 sys 0m1.399s 00:18:08.152 09:59:21 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:08.152 09:59:21 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:18:08.152 ************************************ 00:18:08.152 END TEST nvmf_invalid 00:18:08.152 ************************************ 00:18:08.152 09:59:21 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:18:08.152 09:59:21 nvmf_tcp -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort.sh --transport=tcp 00:18:08.152 09:59:21 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:18:08.152 09:59:21 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:08.152 09:59:21 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:08.152 ************************************ 00:18:08.152 START TEST nvmf_abort 00:18:08.152 ************************************ 00:18:08.152 09:59:21 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort.sh --transport=tcp 00:18:08.444 * Looking for test storage... 00:18:08.444 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:18:08.444 09:59:21 nvmf_tcp.nvmf_abort -- target/abort.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:08.444 09:59:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:18:08.444 09:59:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:08.444 09:59:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:08.444 09:59:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:08.444 09:59:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:08.444 09:59:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:08.444 09:59:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:08.444 09:59:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:08.444 09:59:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:08.444 09:59:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:08.444 09:59:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:08.444 09:59:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec 00:18:08.444 09:59:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=a2b6b25a-cc90-4aea-9f09-c06f8a634aec 00:18:08.444 09:59:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:08.444 09:59:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:08.444 09:59:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:08.444 09:59:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@22 -- 
# NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:08.444 09:59:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:08.444 09:59:21 nvmf_tcp.nvmf_abort -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:08.444 09:59:21 nvmf_tcp.nvmf_abort -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:08.444 09:59:21 nvmf_tcp.nvmf_abort -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:08.444 09:59:21 nvmf_tcp.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:08.444 09:59:21 nvmf_tcp.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:08.444 09:59:21 nvmf_tcp.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:08.444 09:59:21 nvmf_tcp.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:18:08.444 09:59:21 nvmf_tcp.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:08.444 09:59:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@47 -- # : 0 00:18:08.444 09:59:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:08.444 09:59:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:08.444 09:59:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:08.444 09:59:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:08.444 09:59:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:08.444 09:59:21 nvmf_tcp.nvmf_abort -- 
nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:08.444 09:59:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:08.444 09:59:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:08.444 09:59:21 nvmf_tcp.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:08.444 09:59:21 nvmf_tcp.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:18:08.444 09:59:21 nvmf_tcp.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:18:08.444 09:59:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:08.444 09:59:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:08.444 09:59:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:08.444 09:59:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:08.444 09:59:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:08.444 09:59:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:08.444 09:59:21 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:08.444 09:59:21 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:08.444 09:59:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:18:08.444 09:59:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:18:08.444 09:59:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:18:08.444 09:59:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:18:08.444 09:59:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:18:08.444 09:59:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@432 -- # nvmf_veth_init 00:18:08.444 09:59:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:08.444 09:59:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:08.444 09:59:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:18:08.445 09:59:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:18:08.445 09:59:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:08.445 09:59:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:08.445 09:59:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:08.445 09:59:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:08.445 09:59:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:08.445 09:59:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:08.445 09:59:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:08.445 09:59:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:08.445 09:59:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:18:08.445 09:59:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:18:08.445 Cannot find device "nvmf_tgt_br" 00:18:08.445 09:59:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@155 -- # true 00:18:08.445 09:59:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:18:08.445 Cannot find device "nvmf_tgt_br2" 00:18:08.445 09:59:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@156 -- # true 
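Stepping back briefly to the nvmf_invalid run that finished above: its checks are negative-path calls into nvmf_create_subsystem, each asserting that the JSON-RPC error text matches what the target is expected to reject, e.g. an unknown target name yields "Unable to find target", while a serial number or model number carrying an appended control byte (\x1f) yields "Invalid SN" or "Invalid MN". A sketch of that pattern, hedged rather than lifted verbatim from invalid.sh (in the trace the error text comes back prefixed with GoRPCClient, but the assertion is only a substring match on the captured output):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# unknown target name -> error mentioning "Unable to find target"
out=$($rpc nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode12103 2>&1) || true
[[ $out == *"Unable to find target"* ]]

# serial number with a non-printable byte appended -> "Invalid SN"
out=$($rpc nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode31371 2>&1) || true
[[ $out == *"Invalid SN"* ]]

# model number with the same control byte -> "Invalid MN"
out=$($rpc nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode652 2>&1) || true
[[ $out == *"Invalid MN"* ]]

The gen_random_s helper that follows in the trace builds arbitrary printable serial and model numbers character by character (the long string+= sequence) so the same rejection logic can also be exercised with random input.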
00:18:08.445 09:59:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:18:08.445 09:59:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:18:08.445 Cannot find device "nvmf_tgt_br" 00:18:08.445 09:59:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@158 -- # true 00:18:08.445 09:59:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:18:08.445 Cannot find device "nvmf_tgt_br2" 00:18:08.445 09:59:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@159 -- # true 00:18:08.445 09:59:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:18:08.445 09:59:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:18:08.445 09:59:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:08.445 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:08.445 09:59:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@162 -- # true 00:18:08.445 09:59:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:08.445 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:08.445 09:59:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@163 -- # true 00:18:08.445 09:59:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:18:08.445 09:59:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:08.704 09:59:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:08.704 09:59:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:08.704 09:59:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:08.704 09:59:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:08.704 09:59:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:08.704 09:59:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:18:08.705 09:59:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:18:08.705 09:59:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:18:08.705 09:59:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:18:08.705 09:59:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:18:08.705 09:59:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:18:08.705 09:59:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:08.705 09:59:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:08.705 09:59:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:08.705 09:59:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:18:08.705 09:59:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:18:08.705 09:59:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:18:08.705 09:59:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@197 -- # ip 
link set nvmf_tgt_br master nvmf_br 00:18:08.705 09:59:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:08.705 09:59:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:08.705 09:59:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:08.705 09:59:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:18:08.705 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:08.705 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.129 ms 00:18:08.705 00:18:08.705 --- 10.0.0.2 ping statistics --- 00:18:08.705 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:08.705 rtt min/avg/max/mdev = 0.129/0.129/0.129/0.000 ms 00:18:08.705 09:59:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:18:08.705 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:08.705 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.095 ms 00:18:08.705 00:18:08.705 --- 10.0.0.3 ping statistics --- 00:18:08.705 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:08.705 rtt min/avg/max/mdev = 0.095/0.095/0.095/0.000 ms 00:18:08.705 09:59:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:08.705 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:08.705 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:18:08.705 00:18:08.705 --- 10.0.0.1 ping statistics --- 00:18:08.705 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:08.705 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:18:08.705 09:59:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:08.705 09:59:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@433 -- # return 0 00:18:08.705 09:59:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:08.705 09:59:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:08.705 09:59:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:08.705 09:59:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:08.705 09:59:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:08.705 09:59:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:08.705 09:59:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:08.705 09:59:22 nvmf_tcp.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:18:08.705 09:59:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:08.705 09:59:22 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:08.705 09:59:22 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:18:08.705 09:59:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@481 -- # nvmfpid=68406 00:18:08.705 09:59:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:18:08.705 09:59:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@482 -- # waitforlisten 68406 00:18:08.705 09:59:22 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@829 -- # '[' -z 68406 ']' 00:18:08.705 09:59:22 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:08.705 09:59:22 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:08.705 Waiting 
for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:08.705 09:59:22 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:08.705 09:59:22 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:08.705 09:59:22 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:18:08.705 [2024-07-15 09:59:22.256159] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:18:08.705 [2024-07-15 09:59:22.256224] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:08.965 [2024-07-15 09:59:22.395338] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:08.965 [2024-07-15 09:59:22.500721] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:08.965 [2024-07-15 09:59:22.500761] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:08.965 [2024-07-15 09:59:22.500768] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:08.965 [2024-07-15 09:59:22.500773] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:08.965 [2024-07-15 09:59:22.500777] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:08.965 [2024-07-15 09:59:22.500914] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:08.965 [2024-07-15 09:59:22.501114] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:08.965 [2024-07-15 09:59:22.501118] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:18:09.904 09:59:23 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:09.904 09:59:23 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@862 -- # return 0 00:18:09.904 09:59:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:09.904 09:59:23 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:09.904 09:59:23 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:18:09.904 09:59:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:09.904 09:59:23 nvmf_tcp.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:18:09.904 09:59:23 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:09.904 09:59:23 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:18:09.904 [2024-07-15 09:59:23.218444] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:09.904 09:59:23 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:09.904 09:59:23 nvmf_tcp.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:18:09.904 09:59:23 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:09.904 09:59:23 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:18:09.904 Malloc0 00:18:09.904 09:59:23 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:09.904 09:59:23 nvmf_tcp.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd 
bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:18:09.904 09:59:23 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:09.904 09:59:23 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:18:09.904 Delay0 00:18:09.904 09:59:23 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:09.904 09:59:23 nvmf_tcp.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:18:09.904 09:59:23 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:09.904 09:59:23 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:18:09.904 09:59:23 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:09.904 09:59:23 nvmf_tcp.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:18:09.904 09:59:23 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:09.904 09:59:23 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:18:09.904 09:59:23 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:09.904 09:59:23 nvmf_tcp.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:18:09.904 09:59:23 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:09.904 09:59:23 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:18:09.904 [2024-07-15 09:59:23.299423] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:09.904 09:59:23 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:09.904 09:59:23 nvmf_tcp.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:18:09.904 09:59:23 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:09.904 09:59:23 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:18:09.904 09:59:23 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:09.904 09:59:23 nvmf_tcp.nvmf_abort -- target/abort.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:18:10.163 [2024-07-15 09:59:23.496774] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:18:12.064 Initializing NVMe Controllers 00:18:12.064 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:18:12.064 controller IO queue size 128 less than required 00:18:12.064 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:18:12.064 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:18:12.064 Initialization complete. Launching workers. 
00:18:12.064 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 42571 00:18:12.064 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 42632, failed to submit 62 00:18:12.064 success 42575, unsuccess 57, failed 0 00:18:12.064 09:59:25 nvmf_tcp.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:18:12.064 09:59:25 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:12.064 09:59:25 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:18:12.064 09:59:25 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:12.064 09:59:25 nvmf_tcp.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:18:12.064 09:59:25 nvmf_tcp.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:18:12.064 09:59:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:12.064 09:59:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@117 -- # sync 00:18:12.064 09:59:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:12.064 09:59:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@120 -- # set +e 00:18:12.064 09:59:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:12.064 09:59:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:12.064 rmmod nvme_tcp 00:18:12.064 rmmod nvme_fabrics 00:18:12.064 rmmod nvme_keyring 00:18:12.064 09:59:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:12.064 09:59:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@124 -- # set -e 00:18:12.064 09:59:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@125 -- # return 0 00:18:12.064 09:59:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@489 -- # '[' -n 68406 ']' 00:18:12.064 09:59:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@490 -- # killprocess 68406 00:18:12.064 09:59:25 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@948 -- # '[' -z 68406 ']' 00:18:12.064 09:59:25 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@952 -- # kill -0 68406 00:18:12.064 09:59:25 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@953 -- # uname 00:18:12.064 09:59:25 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:12.064 09:59:25 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 68406 00:18:12.322 09:59:25 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:18:12.322 09:59:25 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:18:12.322 killing process with pid 68406 00:18:12.322 09:59:25 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@966 -- # echo 'killing process with pid 68406' 00:18:12.322 09:59:25 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@967 -- # kill 68406 00:18:12.322 09:59:25 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@972 -- # wait 68406 00:18:12.322 09:59:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:12.322 09:59:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:12.322 09:59:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:12.322 09:59:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:12.322 09:59:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:12.322 09:59:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:12.322 09:59:25 nvmf_tcp.nvmf_abort -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:12.322 09:59:25 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:12.650 09:59:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:18:12.650 00:18:12.650 real 0m4.218s 00:18:12.650 user 0m12.235s 00:18:12.650 sys 0m0.889s 00:18:12.650 09:59:25 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:12.650 09:59:25 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:18:12.650 ************************************ 00:18:12.650 END TEST nvmf_abort 00:18:12.650 ************************************ 00:18:12.650 09:59:25 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:18:12.650 09:59:25 nvmf_tcp -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:18:12.650 09:59:25 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:18:12.650 09:59:25 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:12.650 09:59:25 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:12.650 ************************************ 00:18:12.650 START TEST nvmf_ns_hotplug_stress 00:18:12.650 ************************************ 00:18:12.650 09:59:26 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:18:12.650 * Looking for test storage... 00:18:12.650 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:18:12.650 09:59:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:12.650 09:59:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:18:12.650 09:59:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:12.650 09:59:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:12.650 09:59:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:12.650 09:59:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:12.650 09:59:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:12.650 09:59:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:12.650 09:59:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:12.650 09:59:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:12.650 09:59:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:12.650 09:59:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:12.650 09:59:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec 00:18:12.650 09:59:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=a2b6b25a-cc90-4aea-9f09-c06f8a634aec 00:18:12.650 09:59:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:12.650 09:59:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:12.650 09:59:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:12.650 09:59:26 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:12.650 09:59:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:12.650 09:59:26 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:12.650 09:59:26 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:12.650 09:59:26 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:12.650 09:59:26 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:12.650 09:59:26 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:12.650 09:59:26 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:12.650 09:59:26 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:18:12.650 09:59:26 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:12.650 09:59:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@47 -- # : 0 00:18:12.650 09:59:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:12.650 09:59:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:12.650 09:59:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:12.650 09:59:26 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:12.650 09:59:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:12.650 09:59:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:12.650 09:59:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:12.650 09:59:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:12.650 09:59:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:12.650 09:59:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:18:12.650 09:59:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:12.650 09:59:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:12.650 09:59:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:12.650 09:59:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:12.650 09:59:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:12.650 09:59:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:12.650 09:59:26 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:12.650 09:59:26 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:12.650 09:59:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:18:12.650 09:59:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:18:12.650 09:59:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:18:12.650 09:59:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:18:12.650 09:59:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:18:12.650 09:59:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # nvmf_veth_init 00:18:12.650 09:59:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:12.650 09:59:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:12.650 09:59:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:18:12.650 09:59:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:18:12.650 09:59:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:12.650 09:59:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:12.650 09:59:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:12.650 09:59:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:12.650 09:59:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:12.650 09:59:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:12.651 09:59:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:12.651 09:59:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@152 
-- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:12.651 09:59:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:18:12.651 09:59:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:18:12.651 Cannot find device "nvmf_tgt_br" 00:18:12.651 09:59:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@155 -- # true 00:18:12.651 09:59:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:18:12.651 Cannot find device "nvmf_tgt_br2" 00:18:12.651 09:59:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@156 -- # true 00:18:12.651 09:59:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:18:12.651 09:59:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:18:12.907 Cannot find device "nvmf_tgt_br" 00:18:12.907 09:59:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@158 -- # true 00:18:12.907 09:59:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:18:12.907 Cannot find device "nvmf_tgt_br2" 00:18:12.907 09:59:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@159 -- # true 00:18:12.907 09:59:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:18:12.907 09:59:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:18:12.907 09:59:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:12.907 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:12.907 09:59:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@162 -- # true 00:18:12.907 09:59:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:12.907 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:12.907 09:59:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@163 -- # true 00:18:12.907 09:59:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:18:12.907 09:59:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:12.907 09:59:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:12.907 09:59:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:12.907 09:59:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:12.907 09:59:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:12.907 09:59:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:12.908 09:59:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:18:12.908 09:59:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:18:12.908 09:59:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:18:12.908 09:59:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:18:12.908 09:59:26 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:18:12.908 09:59:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:18:12.908 09:59:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:12.908 09:59:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:12.908 09:59:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:12.908 09:59:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:18:12.908 09:59:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:18:12.908 09:59:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:18:12.908 09:59:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:12.908 09:59:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:12.908 09:59:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:12.908 09:59:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:12.908 09:59:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:18:12.908 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:12.908 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.176 ms 00:18:12.908 00:18:12.908 --- 10.0.0.2 ping statistics --- 00:18:12.908 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:12.908 rtt min/avg/max/mdev = 0.176/0.176/0.176/0.000 ms 00:18:12.908 09:59:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:18:12.908 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:12.908 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.145 ms 00:18:12.908 00:18:12.908 --- 10.0.0.3 ping statistics --- 00:18:12.908 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:12.908 rtt min/avg/max/mdev = 0.145/0.145/0.145/0.000 ms 00:18:12.908 09:59:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:12.908 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:12.908 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.053 ms 00:18:12.908 00:18:12.908 --- 10.0.0.1 ping statistics --- 00:18:12.908 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:12.908 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:18:12.908 09:59:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:12.908 09:59:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@433 -- # return 0 00:18:12.908 09:59:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:12.908 09:59:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:12.908 09:59:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:12.908 09:59:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:13.165 09:59:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:13.165 09:59:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:13.165 09:59:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:13.165 09:59:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:18:13.165 09:59:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:13.165 09:59:26 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:13.165 09:59:26 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:18:13.165 09:59:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # nvmfpid=68673 00:18:13.165 09:59:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:18:13.165 09:59:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # waitforlisten 68673 00:18:13.165 09:59:26 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@829 -- # '[' -z 68673 ']' 00:18:13.165 09:59:26 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:13.165 09:59:26 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:13.165 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:13.165 09:59:26 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:13.165 09:59:26 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:13.165 09:59:26 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:18:13.165 [2024-07-15 09:59:26.580527] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:18:13.165 [2024-07-15 09:59:26.580592] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:13.165 [2024-07-15 09:59:26.720433] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:13.423 [2024-07-15 09:59:26.829412] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:18:13.423 [2024-07-15 09:59:26.829463] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:13.423 [2024-07-15 09:59:26.829474] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:13.423 [2024-07-15 09:59:26.829482] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:13.423 [2024-07-15 09:59:26.829490] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:13.423 [2024-07-15 09:59:26.829617] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:13.423 [2024-07-15 09:59:26.830121] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:18:13.423 [2024-07-15 09:59:26.830127] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:13.987 09:59:27 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:13.987 09:59:27 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@862 -- # return 0 00:18:13.987 09:59:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:13.987 09:59:27 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:13.987 09:59:27 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:18:13.987 09:59:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:13.987 09:59:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:18:13.987 09:59:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:18:14.246 [2024-07-15 09:59:27.708314] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:14.246 09:59:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:18:14.504 09:59:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:14.763 [2024-07-15 09:59:28.148912] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:14.763 09:59:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:18:15.021 09:59:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:18:15.021 Malloc0 00:18:15.278 09:59:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:18:15.278 Delay0 00:18:15.278 09:59:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:18:15.536 09:59:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:18:15.795 NULL1 00:18:15.795 
09:59:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:18:16.054 09:59:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:18:16.054 09:59:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=68806 00:18:16.054 09:59:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68806 00:18:16.054 09:59:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:16.054 Read completed with error (sct=0, sc=11) 00:18:16.054 09:59:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:18:16.054 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:18:16.312 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:18:16.312 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:18:16.312 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:18:16.312 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:18:16.312 09:59:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:18:16.312 09:59:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:18:16.571 true 00:18:16.571 09:59:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68806 00:18:16.571 09:59:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:17.505 09:59:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:18:17.506 09:59:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:18:17.506 09:59:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:18:17.763 true 00:18:17.763 09:59:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68806 00:18:17.763 09:59:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:18.021 09:59:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:18:18.280 09:59:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:18:18.280 09:59:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:18:18.280 true 00:18:18.280 09:59:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68806 00:18:18.280 09:59:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:19.660 09:59:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:18:19.660 09:59:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:18:19.660 09:59:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:18:19.660 true 00:18:19.919 09:59:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68806 00:18:19.919 09:59:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:19.919 09:59:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:18:20.178 09:59:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:18:20.178 09:59:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:18:20.438 true 00:18:20.438 09:59:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68806 00:18:20.438 09:59:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:21.374 09:59:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:18:21.633 09:59:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:18:21.633 09:59:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:18:21.892 true 00:18:21.892 09:59:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68806 00:18:21.892 09:59:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:22.151 09:59:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:18:22.151 09:59:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:18:22.151 09:59:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:18:22.411 true 00:18:22.411 09:59:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68806 00:18:22.411 09:59:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:23.347 09:59:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:18:23.604 09:59:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:18:23.604 09:59:37 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:18:23.863 true 00:18:23.863 09:59:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68806 00:18:23.863 09:59:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:24.123 09:59:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:18:24.382 09:59:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:18:24.382 09:59:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:18:24.382 true 00:18:24.382 09:59:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68806 00:18:24.382 09:59:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:25.317 09:59:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:18:25.576 09:59:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:18:25.576 09:59:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:18:25.834 true 00:18:25.834 09:59:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68806 00:18:25.834 09:59:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:26.092 09:59:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:18:26.352 09:59:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:18:26.352 09:59:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:18:26.352 true 00:18:26.352 09:59:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68806 00:18:26.352 09:59:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:27.317 09:59:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:18:27.576 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:18:27.576 09:59:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:18:27.576 09:59:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:18:27.835 true 00:18:27.835 09:59:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68806 00:18:27.835 09:59:41 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:28.094 09:59:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:18:28.354 09:59:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:18:28.354 09:59:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:18:28.354 true 00:18:28.354 09:59:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68806 00:18:28.354 09:59:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:29.729 09:59:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:18:29.729 09:59:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:18:29.729 09:59:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:18:29.729 true 00:18:29.729 09:59:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68806 00:18:29.729 09:59:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:29.986 09:59:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:18:30.243 09:59:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:18:30.243 09:59:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:18:30.502 true 00:18:30.502 09:59:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68806 00:18:30.502 09:59:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:31.439 09:59:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:18:31.704 09:59:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:18:31.705 09:59:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:18:31.976 true 00:18:31.976 09:59:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68806 00:18:31.976 09:59:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:32.235 09:59:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:18:32.235 09:59:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # 
null_size=1017 00:18:32.235 09:59:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:18:32.493 true 00:18:32.493 09:59:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68806 00:18:32.493 09:59:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:33.430 09:59:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:18:33.689 09:59:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:18:33.689 09:59:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:18:33.947 true 00:18:33.947 09:59:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68806 00:18:33.947 09:59:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:34.205 09:59:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:18:34.205 09:59:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:18:34.205 09:59:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:18:34.464 true 00:18:34.464 09:59:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68806 00:18:34.464 09:59:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:35.401 09:59:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:18:35.660 09:59:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:18:35.660 09:59:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:18:35.928 true 00:18:35.928 09:59:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68806 00:18:35.928 09:59:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:36.200 09:59:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:18:36.459 09:59:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:18:36.459 09:59:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:18:36.718 true 00:18:36.718 09:59:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68806 00:18:36.718 09:59:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:37.656 09:59:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:18:37.656 09:59:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:18:37.656 09:59:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:18:37.915 true 00:18:37.915 09:59:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68806 00:18:37.915 09:59:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:38.174 09:59:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:18:38.432 09:59:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:18:38.432 09:59:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:18:38.432 true 00:18:38.432 09:59:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68806 00:18:38.432 09:59:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:39.367 09:59:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:18:39.627 09:59:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:18:39.627 09:59:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:18:39.885 true 00:18:39.885 09:59:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68806 00:18:39.885 09:59:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:40.144 09:59:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:18:40.403 09:59:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:18:40.403 09:59:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:18:40.662 true 00:18:40.662 09:59:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68806 00:18:40.662 09:59:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:41.599 09:59:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:18:41.599 09:59:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:18:41.599 09:59:55 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:18:41.858 true 00:18:41.858 09:59:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68806 00:18:41.858 09:59:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:42.118 09:59:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:18:42.376 09:59:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:18:42.376 09:59:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:18:42.634 true 00:18:42.634 09:59:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68806 00:18:42.634 09:59:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:43.571 09:59:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:18:43.831 09:59:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:18:43.831 09:59:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:18:43.831 true 00:18:43.831 09:59:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68806 00:18:43.831 09:59:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:44.089 09:59:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:18:44.348 09:59:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:18:44.348 09:59:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:18:44.673 true 00:18:44.673 09:59:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68806 00:18:44.673 09:59:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:45.610 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:18:45.610 09:59:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:18:45.869 09:59:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:18:45.869 09:59:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:18:45.869 true 00:18:45.869 09:59:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68806 00:18:45.869 09:59:59 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:46.127 Initializing NVMe Controllers 00:18:46.127 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:18:46.127 Controller IO queue size 128, less than required. 00:18:46.127 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:18:46.127 Controller IO queue size 128, less than required. 00:18:46.127 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:18:46.127 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:18:46.127 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:18:46.127 Initialization complete. Launching workers. 00:18:46.127 ======================================================== 00:18:46.127 Latency(us) 00:18:46.127 Device Information : IOPS MiB/s Average min max 00:18:46.127 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 303.73 0.15 227617.42 1908.98 1039412.77 00:18:46.127 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 12669.66 6.19 10102.69 3201.55 521158.72 00:18:46.127 ======================================================== 00:18:46.127 Total : 12973.39 6.33 15195.08 1908.98 1039412.77 00:18:46.127 00:18:46.128 09:59:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:18:46.386 09:59:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 00:18:46.386 09:59:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:18:46.645 true 00:18:46.645 10:00:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68806 00:18:46.645 /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (68806) - No such process 00:18:46.645 10:00:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 68806 00:18:46.645 10:00:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:46.915 10:00:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:18:47.171 10:00:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:18:47.171 10:00:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:18:47.171 10:00:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:18:47.171 10:00:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:18:47.171 10:00:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:18:47.428 null0 00:18:47.428 10:00:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:18:47.428 10:00:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:18:47.428 10:00:00 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:18:47.428 null1 00:18:47.428 10:00:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:18:47.428 10:00:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:18:47.428 10:00:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:18:47.687 null2 00:18:47.687 10:00:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:18:47.687 10:00:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:18:47.687 10:00:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:18:47.946 null3 00:18:47.946 10:00:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:18:47.946 10:00:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:18:47.946 10:00:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:18:48.205 null4 00:18:48.205 10:00:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:18:48.205 10:00:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:18:48.205 10:00:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:18:48.464 null5 00:18:48.464 10:00:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:18:48.464 10:00:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:18:48.464 10:00:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:18:48.724 null6 00:18:48.724 10:00:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:18:48.724 10:00:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:18:48.724 10:00:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:18:48.981 null7 00:18:48.981 10:00:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:18:48.981 10:00:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:18:48.981 10:00:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:18:48.981 10:00:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:18:48.981 10:00:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:18:48.981 10:00:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
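The earlier stretch of this trace (ns_hotplug_stress.sh@44-@50, everything up to the "kill: (68806) - No such process" message above) is a single hot-plug iteration repeated over and over: while the I/O generator (pid 68806) is still running, namespace 1 is removed and re-added on nqn.2016-06.io.spdk:cnode1 and the NULL1 null bdev is resized one step larger each pass (1017, 1018, ...). A minimal sketch of that loop, reconstructed only from the traced commands; the while/kill -0 wrapper and the perf_pid variable name are assumptions, the rpc.py invocations are copied from the trace:

    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    null_size=1000
    # sh@44: keep stressing while the I/O generator is alive (pid 68806 in this run)
    while kill -0 "$perf_pid" 2>/dev/null; do
        # sh@45-@46: hot-remove namespace 1, then hot-add it back on top of the Delay0 bdev
        "$rpc_py" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
        "$rpc_py" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
        # sh@49-@50: bump the size and resize NULL1 (null_size=1017, 1018, ... in the trace)
        ((++null_size))
        "$rpc_py" bdev_null_resize NULL1 "$null_size"
    done
    wait "$perf_pid"   # sh@53: reap the generator once it has exited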
00:18:48.981 10:00:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:18:48.981 10:00:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:18:48.981 10:00:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:18:48.981 10:00:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:18:48.981 10:00:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:18:48.981 10:00:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:18:48.981 10:00:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:18:48.981 10:00:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:18:48.981 10:00:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:18:48.981 10:00:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:18:48.981 10:00:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:18:48.981 10:00:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:18:48.981 10:00:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:18:48.981 10:00:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:18:48.981 10:00:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:18:48.981 10:00:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:18:48.981 10:00:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:18:48.981 10:00:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:18:48.981 10:00:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:18:48.981 10:00:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:18:48.981 10:00:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:18:48.981 10:00:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:18:48.981 10:00:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
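Each of the background workers traced here runs the same small helper (ns_hotplug_stress.sh@14-@18): ten iterations of adding a namespace backed by its null bdev and immediately removing it again. A sketch reconstructed from the @14-@18 trace lines; the function name and argument order follow the "add_remove 1 null0" ... "add_remove 8 null7" calls in the trace, the exact script text is an assumption:

    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # Repeatedly hot-add and hot-remove one namespace (sh@14-@18).
    #   $1 = namespace id, $2 = backing null bdev (e.g. add_remove 1 null0)
    add_remove() {
        local nsid=$1 bdev=$2
        for ((i = 0; i < 10; i++)); do
            "$rpc_py" nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"
            "$rpc_py" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"
        done
    }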
00:18:48.981 10:00:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:18:48.981 10:00:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:18:48.981 10:00:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:18:48.981 10:00:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:18:48.981 10:00:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:18:48.981 10:00:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:18:48.981 10:00:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:18:48.981 10:00:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:18:48.981 10:00:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:18:48.981 10:00:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:18:48.981 10:00:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:18:48.981 10:00:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:18:48.981 10:00:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:18:48.981 10:00:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:18:48.981 10:00:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:18:48.981 10:00:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:18:48.981 10:00:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:18:48.981 10:00:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:18:48.981 10:00:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:18:48.981 10:00:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:18:48.981 10:00:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:18:48.981 10:00:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:18:48.981 10:00:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:18:48.981 10:00:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:18:48.981 10:00:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:18:48.981 10:00:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:18:48.981 10:00:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:18:48.981 10:00:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:18:48.981 10:00:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
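The interleaved nvmf_subsystem_add_ns / nvmf_subsystem_remove_ns calls that fill the rest of this trace come from eight such workers running concurrently. They are launched by the driver loop (ns_hotplug_stress.sh@58-@66), and the "wait 69875 69877 ... 69890" entry just below is the final wait on their pids. A sketch under the same caveats (loop bodies inferred from the trace, the script text itself is an assumption):

    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    nthreads=8
    pids=()
    # sh@59-@60: one backing null bdev per worker (arguments copied from the trace)
    for ((i = 0; i < nthreads; i++)); do
        "$rpc_py" bdev_null_create "null$i" 100 4096
    done
    # sh@62-@64: run the workers concurrently, namespace id i+1 against bdev null$i
    for ((i = 0; i < nthreads; i++)); do
        add_remove $((i + 1)) "null$i" &
        pids+=($!)
    done
    wait "${pids[@]}"   # sh@66: appears in the trace as "wait 69875 69877 69879 ..."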
00:18:48.981 10:00:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:18:48.981 10:00:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:18:48.981 10:00:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:18:48.981 10:00:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:18:48.981 10:00:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:18:48.981 10:00:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 69875 69877 69879 69880 69882 69885 69886 69890 00:18:48.981 10:00:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:18:48.981 10:00:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:18:48.981 10:00:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:18:48.981 10:00:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:18:48.981 10:00:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:18:49.239 10:00:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:18:49.239 10:00:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:18:49.239 10:00:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:49.239 10:00:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:18:49.239 10:00:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:18:49.239 10:00:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:18:49.239 10:00:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:18:49.239 10:00:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:18:49.239 10:00:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:18:49.239 10:00:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:18:49.239 10:00:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:18:49.496 10:00:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:18:49.496 10:00:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:18:49.496 10:00:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:18:49.496 10:00:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:18:49.496 10:00:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:18:49.496 10:00:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:18:49.496 10:00:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:18:49.496 10:00:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:18:49.496 10:00:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:18:49.496 10:00:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:18:49.496 10:00:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:18:49.496 10:00:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:18:49.496 10:00:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:18:49.496 10:00:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:18:49.496 10:00:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:18:49.496 10:00:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:18:49.496 10:00:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:18:49.496 10:00:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:18:49.496 10:00:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:18:49.497 10:00:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:18:49.753 10:00:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:18:49.753 10:00:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:18:49.753 10:00:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:18:49.753 10:00:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:49.753 10:00:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:18:49.753 10:00:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:18:49.753 10:00:03 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:18:49.753 10:00:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:18:49.753 10:00:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:18:49.753 10:00:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:18:49.753 10:00:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:18:49.753 10:00:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:18:50.013 10:00:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:18:50.013 10:00:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:18:50.013 10:00:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:18:50.013 10:00:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:18:50.013 10:00:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:18:50.013 10:00:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:18:50.013 10:00:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:18:50.013 10:00:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:18:50.013 10:00:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:18:50.013 10:00:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:18:50.013 10:00:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:18:50.013 10:00:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:18:50.013 10:00:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:18:50.013 10:00:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:18:50.013 10:00:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:18:50.013 10:00:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:18:50.013 10:00:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:18:50.013 10:00:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:18:50.272 10:00:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:18:50.272 10:00:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # 
(( i < 10 )) 00:18:50.272 10:00:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:18:50.272 10:00:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:18:50.273 10:00:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:18:50.273 10:00:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:18:50.273 10:00:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:50.273 10:00:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:18:50.273 10:00:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:18:50.273 10:00:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:18:50.528 10:00:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:18:50.528 10:00:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:18:50.528 10:00:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:18:50.528 10:00:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:18:50.528 10:00:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:18:50.528 10:00:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:18:50.528 10:00:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:18:50.528 10:00:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:18:50.528 10:00:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:18:50.528 10:00:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:18:50.528 10:00:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:18:50.529 10:00:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:18:50.529 10:00:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:18:50.529 10:00:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:18:50.529 10:00:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 
-- # (( i < 10 )) 00:18:50.529 10:00:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:18:50.529 10:00:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:18:50.529 10:00:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:18:50.529 10:00:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:18:50.529 10:00:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:18:50.529 10:00:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:18:50.529 10:00:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:18:50.785 10:00:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:18:50.785 10:00:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:18:50.785 10:00:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:18:50.785 10:00:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:18:50.785 10:00:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:18:50.785 10:00:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:18:50.785 10:00:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:50.785 10:00:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:18:51.043 10:00:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:18:51.043 10:00:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:18:51.043 10:00:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:18:51.043 10:00:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:18:51.043 10:00:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:18:51.043 10:00:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:18:51.043 10:00:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:18:51.043 10:00:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 
nqn.2016-06.io.spdk:cnode1 null7 00:18:51.043 10:00:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:18:51.043 10:00:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:18:51.043 10:00:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:18:51.043 10:00:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:18:51.043 10:00:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:18:51.043 10:00:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:18:51.043 10:00:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:18:51.300 10:00:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:18:51.300 10:00:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:18:51.300 10:00:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:18:51.300 10:00:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:18:51.300 10:00:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:18:51.300 10:00:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:18:51.300 10:00:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:18:51.300 10:00:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:18:51.300 10:00:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:18:51.300 10:00:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:18:51.300 10:00:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:18:51.300 10:00:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:18:51.300 10:00:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:18:51.300 10:00:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:18:51.556 10:00:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:18:51.556 10:00:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:18:51.556 10:00:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:18:51.556 10:00:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:18:51.556 10:00:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:18:51.556 10:00:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:18:51.556 10:00:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:18:51.556 10:00:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:18:51.556 10:00:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:51.556 10:00:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:18:51.556 10:00:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:18:51.556 10:00:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:18:51.813 10:00:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:18:51.813 10:00:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:18:51.813 10:00:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:18:51.813 10:00:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:18:51.813 10:00:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:18:51.813 10:00:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:18:51.813 10:00:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:18:51.813 10:00:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:18:51.813 10:00:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:18:51.813 10:00:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:18:51.813 10:00:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:18:51.813 10:00:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:18:51.813 10:00:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:18:51.813 10:00:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:18:51.813 10:00:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 
-- # (( ++i )) 00:18:51.813 10:00:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:18:51.813 10:00:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:18:52.070 10:00:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:18:52.070 10:00:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:18:52.070 10:00:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:18:52.070 10:00:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:18:52.070 10:00:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:18:52.070 10:00:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:18:52.070 10:00:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:52.070 10:00:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:18:52.070 10:00:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:18:52.070 10:00:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:18:52.070 10:00:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:18:52.070 10:00:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:18:52.070 10:00:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:18:52.070 10:00:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:18:52.326 10:00:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:18:52.326 10:00:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:18:52.326 10:00:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:18:52.326 10:00:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:18:52.326 10:00:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:18:52.326 10:00:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:18:52.326 10:00:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:18:52.326 10:00:05 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:18:52.326 10:00:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:18:52.326 10:00:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:18:52.326 10:00:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:18:52.326 10:00:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:18:52.326 10:00:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:18:52.326 10:00:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:18:52.326 10:00:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:18:52.584 10:00:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:18:52.584 10:00:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:18:52.584 10:00:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:18:52.584 10:00:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:18:52.584 10:00:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:18:52.584 10:00:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:18:52.584 10:00:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:18:52.584 10:00:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:52.584 10:00:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:18:52.584 10:00:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:18:52.584 10:00:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:18:52.584 10:00:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:18:52.584 10:00:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:18:52.584 10:00:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:18:52.584 10:00:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:18:52.584 10:00:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:18:52.584 10:00:06 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:18:52.841 10:00:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:18:52.841 10:00:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:18:52.841 10:00:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:18:52.841 10:00:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:18:52.841 10:00:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:18:52.841 10:00:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:18:52.841 10:00:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:18:52.841 10:00:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:18:52.841 10:00:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:18:52.841 10:00:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:18:52.841 10:00:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:18:52.841 10:00:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:18:52.841 10:00:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:18:52.841 10:00:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:18:52.841 10:00:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:18:52.841 10:00:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:18:52.841 10:00:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:18:52.841 10:00:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:18:53.099 10:00:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:18:53.099 10:00:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:18:53.099 10:00:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:18:53.099 10:00:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:18:53.099 10:00:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:18:53.099 10:00:06 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:18:53.099 10:00:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:18:53.099 10:00:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:53.099 10:00:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:18:53.099 10:00:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:18:53.099 10:00:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:18:53.099 10:00:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:18:53.099 10:00:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:18:53.357 10:00:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:18:53.357 10:00:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:18:53.357 10:00:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:18:53.357 10:00:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:18:53.357 10:00:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:18:53.357 10:00:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:18:53.357 10:00:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:18:53.357 10:00:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:18:53.357 10:00:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:18:53.357 10:00:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:18:53.357 10:00:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:18:53.357 10:00:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:18:53.357 10:00:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:18:53.357 10:00:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:18:53.357 10:00:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:18:53.357 10:00:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:18:53.357 10:00:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:18:53.357 10:00:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:18:53.357 10:00:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:18:53.614 10:00:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:18:53.614 10:00:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:18:53.614 10:00:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:18:53.614 10:00:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:18:53.614 10:00:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:18:53.614 10:00:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:18:53.614 10:00:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:18:53.614 10:00:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:18:53.614 10:00:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:18:53.614 10:00:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:53.614 10:00:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:18:53.871 10:00:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:18:53.872 10:00:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:18:53.872 10:00:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:18:53.872 10:00:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:18:53.872 10:00:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:18:53.872 10:00:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:18:53.872 10:00:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:18:53.872 10:00:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:18:53.872 10:00:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 
-- # (( ++i )) 00:18:53.872 10:00:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:18:53.872 10:00:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:18:53.872 10:00:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:18:53.872 10:00:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:18:53.872 10:00:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:18:53.872 10:00:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:18:53.872 10:00:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:18:53.872 10:00:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:18:53.872 10:00:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:18:53.872 10:00:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:18:53.872 10:00:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:18:53.872 10:00:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:18:53.872 10:00:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:18:54.129 10:00:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:18:54.129 10:00:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:18:54.129 10:00:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:18:54.129 10:00:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:18:54.129 10:00:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:18:54.129 10:00:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:18:54.129 10:00:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:54.129 10:00:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:18:54.129 10:00:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:18:54.129 10:00:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:18:54.388 10:00:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:18:54.388 10:00:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:18:54.388 10:00:07 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:18:54.388 10:00:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:18:54.388 10:00:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:18:54.388 10:00:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:18:54.388 10:00:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:18:54.388 10:00:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:18:54.388 10:00:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:18:54.388 10:00:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:18:54.388 10:00:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:18:54.646 10:00:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:18:54.646 10:00:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:18:54.646 10:00:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:18:54.646 10:00:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:18:54.646 10:00:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:18:54.646 10:00:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:54.646 10:00:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # sync 00:18:54.646 10:00:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:54.646 10:00:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@120 -- # set +e 00:18:54.646 10:00:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:54.646 10:00:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:54.646 rmmod nvme_tcp 00:18:54.961 rmmod nvme_fabrics 00:18:54.961 rmmod nvme_keyring 00:18:54.961 10:00:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:54.961 10:00:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set -e 00:18:54.961 10:00:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # return 0 00:18:54.961 10:00:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # '[' -n 68673 ']' 00:18:54.961 10:00:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # killprocess 68673 00:18:54.961 10:00:08 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@948 -- # '[' -z 68673 ']' 00:18:54.961 10:00:08 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # kill -0 68673 00:18:54.961 10:00:08 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # uname 00:18:54.961 10:00:08 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:54.961 10:00:08 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 68673 00:18:54.961 10:00:08 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:18:54.961 killing process with pid 68673 00:18:54.961 10:00:08 nvmf_tcp.nvmf_ns_hotplug_stress -- 
common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:18:54.961 10:00:08 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@966 -- # echo 'killing process with pid 68673' 00:18:54.961 10:00:08 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@967 -- # kill 68673 00:18:54.961 10:00:08 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # wait 68673 00:18:55.276 10:00:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:55.276 10:00:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:55.276 10:00:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:55.276 10:00:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:55.276 10:00:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:55.276 10:00:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:55.276 10:00:08 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:55.276 10:00:08 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:55.276 10:00:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:18:55.276 00:18:55.276 real 0m42.595s 00:18:55.276 user 3m24.098s 00:18:55.276 sys 0m11.117s 00:18:55.276 10:00:08 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:55.276 10:00:08 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:18:55.276 ************************************ 00:18:55.276 END TEST nvmf_ns_hotplug_stress 00:18:55.276 ************************************ 00:18:55.276 10:00:08 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:18:55.276 10:00:08 nvmf_tcp -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:18:55.276 10:00:08 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:18:55.276 10:00:08 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:55.276 10:00:08 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:55.276 ************************************ 00:18:55.276 START TEST nvmf_connect_stress 00:18:55.276 ************************************ 00:18:55.276 10:00:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:18:55.276 * Looking for test storage... 
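The add/remove churn traced above for nvmf_ns_hotplug_stress boils down to a short randomized loop; reconstructed from the ns_hotplug_stress.sh@16-@18 trace lines it is roughly the sketch below (the namespace-selection logic is an assumption, only the RPC shapes and the 10-pass bound come from the trace):

  rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  i=0
  while (( i < 10 )); do                      # ns_hotplug_stress.sh@16
      nsid=$((RANDOM % 8 + 1))                # assumed: the real script picks a random mix of NSIDs/bdevs per pass
      "$rpc_py" nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "null$((nsid - 1))"   # @17
      "$rpc_py" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"                       # @18
      (( ++i ))
  done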
00:18:55.276 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:18:55.276 10:00:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:55.276 10:00:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:18:55.276 10:00:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:55.276 10:00:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:55.276 10:00:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:55.276 10:00:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:55.276 10:00:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:55.276 10:00:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:55.276 10:00:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:55.276 10:00:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:55.276 10:00:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:55.276 10:00:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:55.276 10:00:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec 00:18:55.276 10:00:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=a2b6b25a-cc90-4aea-9f09-c06f8a634aec 00:18:55.276 10:00:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:55.276 10:00:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:55.276 10:00:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:55.276 10:00:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:55.276 10:00:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:55.276 10:00:08 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:55.276 10:00:08 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:55.276 10:00:08 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:55.276 10:00:08 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:55.276 10:00:08 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:55.276 10:00:08 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:55.276 10:00:08 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:18:55.276 10:00:08 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:55.276 10:00:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@47 -- # : 0 00:18:55.276 10:00:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:55.276 10:00:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:55.276 10:00:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:55.276 10:00:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:55.276 10:00:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:55.276 10:00:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:55.276 10:00:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:55.276 10:00:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:55.276 10:00:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:18:55.276 10:00:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:55.276 10:00:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:55.276 10:00:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:55.276 10:00:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:55.276 10:00:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:55.276 10:00:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:55.276 10:00:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:18:55.276 10:00:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:55.276 10:00:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:18:55.276 10:00:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:18:55.276 10:00:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:18:55.276 10:00:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:18:55.276 10:00:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:18:55.276 10:00:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@432 -- # nvmf_veth_init 00:18:55.276 10:00:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:55.276 10:00:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:55.276 10:00:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:18:55.276 10:00:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:18:55.276 10:00:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:55.276 10:00:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:55.276 10:00:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:55.276 10:00:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:55.276 10:00:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:55.276 10:00:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:55.276 10:00:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:55.276 10:00:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:55.276 10:00:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:18:55.276 10:00:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:18:55.544 Cannot find device "nvmf_tgt_br" 00:18:55.544 10:00:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@155 -- # true 00:18:55.544 10:00:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:18:55.544 Cannot find device "nvmf_tgt_br2" 00:18:55.544 10:00:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@156 -- # true 00:18:55.544 10:00:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:18:55.544 10:00:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:18:55.544 Cannot find device "nvmf_tgt_br" 00:18:55.544 10:00:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@158 -- # true 00:18:55.544 10:00:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:18:55.544 Cannot find device "nvmf_tgt_br2" 00:18:55.544 10:00:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@159 -- # true 00:18:55.544 10:00:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:18:55.544 10:00:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:18:55.544 10:00:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link delete nvmf_tgt_if 00:18:55.544 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:55.544 10:00:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@162 -- # true 00:18:55.544 10:00:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:55.544 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:55.544 10:00:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@163 -- # true 00:18:55.544 10:00:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:18:55.544 10:00:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:55.544 10:00:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:55.544 10:00:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:55.544 10:00:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:55.544 10:00:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:55.544 10:00:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:55.544 10:00:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:18:55.544 10:00:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:18:55.544 10:00:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:18:55.544 10:00:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:18:55.544 10:00:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:18:55.544 10:00:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:18:55.544 10:00:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:55.544 10:00:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:55.544 10:00:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:55.544 10:00:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:18:55.544 10:00:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:18:55.544 10:00:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:18:55.544 10:00:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:55.802 10:00:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:55.802 10:00:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:55.802 10:00:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:55.802 10:00:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:18:55.802 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:18:55.802 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.086 ms 00:18:55.802 00:18:55.802 --- 10.0.0.2 ping statistics --- 00:18:55.802 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:55.802 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:18:55.802 10:00:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:18:55.802 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:55.802 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.046 ms 00:18:55.802 00:18:55.802 --- 10.0.0.3 ping statistics --- 00:18:55.802 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:55.802 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:18:55.802 10:00:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:55.802 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:55.802 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.057 ms 00:18:55.802 00:18:55.802 --- 10.0.0.1 ping statistics --- 00:18:55.802 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:55.802 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:18:55.802 10:00:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:55.802 10:00:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@433 -- # return 0 00:18:55.802 10:00:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:55.802 10:00:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:55.802 10:00:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:55.802 10:00:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:55.802 10:00:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:55.802 10:00:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:55.802 10:00:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:55.802 10:00:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:18:55.802 10:00:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:55.803 10:00:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:55.803 10:00:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:55.803 10:00:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@481 -- # nvmfpid=71224 00:18:55.803 10:00:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:18:55.803 10:00:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@482 -- # waitforlisten 71224 00:18:55.803 10:00:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@829 -- # '[' -z 71224 ']' 00:18:55.803 10:00:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:55.803 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:55.803 10:00:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:55.803 10:00:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
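For reference, the nvmf_veth_init sequence traced just above reduces to roughly the following topology (interface names, addresses and iptables rules are exactly as they appear in the trace; the ordering is condensed and the option handling inside nvmf/common.sh is assumed):

  ip netns add nvmf_tgt_ns_spdk
  # one veth pair for the initiator, two for the target; the target ends live inside the netns
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  # addressing: initiator 10.0.0.1/24, target addresses 10.0.0.2/24 and 10.0.0.3/24 in the netns
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" up; done
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  # bridge the host-side peers together so initiator and target can reach each other
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  # let NVMe/TCP (port 4420) in on the initiator interface and allow forwarding across the bridge
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT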
00:18:55.803 10:00:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:55.803 10:00:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:55.803 [2024-07-15 10:00:09.271188] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:18:55.803 [2024-07-15 10:00:09.271260] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:56.060 [2024-07-15 10:00:09.409170] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:56.060 [2024-07-15 10:00:09.514199] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:56.060 [2024-07-15 10:00:09.514252] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:56.060 [2024-07-15 10:00:09.514258] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:56.060 [2024-07-15 10:00:09.514263] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:56.060 [2024-07-15 10:00:09.514267] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:56.060 [2024-07-15 10:00:09.514463] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:56.060 [2024-07-15 10:00:09.515670] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:56.060 [2024-07-15 10:00:09.515683] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:18:56.625 10:00:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:56.625 10:00:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@862 -- # return 0 00:18:56.625 10:00:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:56.625 10:00:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:56.625 10:00:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:56.884 10:00:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:56.884 10:00:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:56.884 10:00:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:56.884 10:00:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:56.884 [2024-07-15 10:00:10.232024] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:56.884 10:00:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:56.884 10:00:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:18:56.884 10:00:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:56.884 10:00:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:56.884 10:00:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:56.884 10:00:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 
10.0.0.2 -s 4420 00:18:56.884 10:00:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:56.884 10:00:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:56.884 [2024-07-15 10:00:10.257126] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:56.884 10:00:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:56.884 10:00:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:18:56.884 10:00:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:56.884 10:00:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:56.884 NULL1 00:18:56.884 10:00:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:56.884 10:00:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=71276 00:18:56.884 10:00:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /home/vagrant/spdk_repo/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:18:56.884 10:00:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt 00:18:56.884 10:00:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt 00:18:56.884 10:00:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:18:56.884 10:00:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:56.884 10:00:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:56.884 10:00:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:56.884 10:00:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:56.884 10:00:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:56.884 10:00:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:56.884 10:00:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:56.884 10:00:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:56.884 10:00:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:56.884 10:00:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:56.884 10:00:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:56.884 10:00:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:56.884 10:00:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:56.884 10:00:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:56.884 10:00:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:56.884 10:00:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:56.884 10:00:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:56.884 10:00:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:56.884 10:00:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 
00:18:56.884 10:00:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:56.884 10:00:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:56.884 10:00:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:56.884 10:00:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:56.884 10:00:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:56.884 10:00:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:56.884 10:00:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:56.884 10:00:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:56.884 10:00:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:56.884 10:00:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:56.884 10:00:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:56.884 10:00:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:56.884 10:00:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:56.884 10:00:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:56.884 10:00:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:56.884 10:00:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:56.884 10:00:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:56.884 10:00:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:56.884 10:00:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:56.884 10:00:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:56.884 10:00:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:56.884 10:00:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71276 00:18:56.884 10:00:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:56.884 10:00:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:56.884 10:00:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:57.142 10:00:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:57.142 10:00:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71276 00:18:57.142 10:00:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:57.142 10:00:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:57.142 10:00:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:57.707 10:00:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:57.707 10:00:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71276 00:18:57.707 10:00:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:57.707 10:00:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:57.707 10:00:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:57.965 10:00:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 
-- # [[ 0 == 0 ]] 00:18:57.965 10:00:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71276 00:18:57.965 10:00:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:57.965 10:00:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:57.965 10:00:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:58.223 10:00:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:58.223 10:00:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71276 00:18:58.223 10:00:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:58.223 10:00:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:58.223 10:00:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:58.481 10:00:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:58.481 10:00:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71276 00:18:58.481 10:00:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:58.481 10:00:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:58.481 10:00:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:59.048 10:00:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:59.048 10:00:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71276 00:18:59.048 10:00:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:59.048 10:00:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:59.048 10:00:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:59.306 10:00:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:59.306 10:00:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71276 00:18:59.306 10:00:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:59.306 10:00:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:59.306 10:00:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:59.567 10:00:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:59.567 10:00:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71276 00:18:59.567 10:00:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:59.567 10:00:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:59.567 10:00:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:59.828 10:00:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:59.828 10:00:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71276 00:18:59.828 10:00:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:59.828 10:00:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:59.828 10:00:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:00.087 10:00:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:00.087 10:00:13 nvmf_tcp.nvmf_connect_stress -- 
target/connect_stress.sh@34 -- # kill -0 71276 00:19:00.087 10:00:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:00.087 10:00:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:00.087 10:00:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:00.654 10:00:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:00.654 10:00:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71276 00:19:00.654 10:00:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:00.654 10:00:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:00.654 10:00:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:00.915 10:00:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:00.915 10:00:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71276 00:19:00.915 10:00:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:00.915 10:00:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:00.915 10:00:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:01.172 10:00:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:01.172 10:00:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71276 00:19:01.172 10:00:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:01.172 10:00:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:01.172 10:00:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:01.430 10:00:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:01.430 10:00:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71276 00:19:01.430 10:00:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:01.430 10:00:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:01.430 10:00:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:01.688 10:00:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:01.688 10:00:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71276 00:19:01.688 10:00:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:01.688 10:00:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:01.688 10:00:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:02.255 10:00:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:02.255 10:00:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71276 00:19:02.255 10:00:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:02.255 10:00:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:02.255 10:00:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:02.520 10:00:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:02.521 10:00:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71276 00:19:02.521 10:00:15 
nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:02.521 10:00:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:02.521 10:00:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:02.779 10:00:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:02.779 10:00:16 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71276 00:19:02.779 10:00:16 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:02.779 10:00:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:02.779 10:00:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:03.041 10:00:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:03.041 10:00:16 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71276 00:19:03.041 10:00:16 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:03.041 10:00:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:03.041 10:00:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:03.611 10:00:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:03.611 10:00:16 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71276 00:19:03.611 10:00:16 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:03.611 10:00:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:03.611 10:00:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:03.869 10:00:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:03.869 10:00:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71276 00:19:03.869 10:00:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:03.869 10:00:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:03.869 10:00:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:04.128 10:00:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:04.128 10:00:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71276 00:19:04.128 10:00:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:04.128 10:00:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:04.128 10:00:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:04.388 10:00:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:04.388 10:00:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71276 00:19:04.388 10:00:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:04.388 10:00:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:04.388 10:00:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:04.646 10:00:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:04.646 10:00:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71276 00:19:04.646 10:00:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 
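The repeated kill -0 71276 / rpc_cmd pairs traced through this stretch are the stress monitor from connect_stress.sh. Pieced together from the @15-@35 trace lines, the test body is roughly the sketch below; how rpc.txt is populated by the seq 1 20 / cat loop and how it is handed to rpc_cmd are assumptions, the RPC invocations themselves are as traced (rpc_cmd is the autotest_common.sh helper around scripts/rpc.py):

  # target setup, as traced: TCP transport, a subsystem capped at 10 namespaces, one null bdev (size 1000, block size 512)
  rpc_cmd nvmf_create_transport -t tcp -o -u 8192
  rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  rpc_cmd bdev_null_create NULL1 1000 512

  # run the connect/disconnect stressor against the listener for 10 seconds (PID 71276 in this run)
  /home/vagrant/spdk_repo/spdk/test/nvme/connect_stress/connect_stress -c 0x1 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 &
  PERF_PID=$!

  # while the stressor is alive, keep replaying the RPCs cached in rpc.txt against the target
  rpcs=/home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt   # rebuilt earlier by the seq 1 20 / cat loop
  while kill -0 "$PERF_PID"; do        # connect_stress.sh@34
      rpc_cmd < "$rpcs"                # connect_stress.sh@35; the redirection is an assumption
  done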
00:19:04.646 10:00:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:04.646 10:00:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:05.214 10:00:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:05.214 10:00:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71276 00:19:05.214 10:00:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:05.214 10:00:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:05.214 10:00:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:05.473 10:00:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:05.473 10:00:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71276 00:19:05.473 10:00:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:05.473 10:00:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:05.473 10:00:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:05.731 10:00:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:05.731 10:00:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71276 00:19:05.731 10:00:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:05.731 10:00:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:05.731 10:00:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:05.989 10:00:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:05.989 10:00:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71276 00:19:05.989 10:00:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:05.989 10:00:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:05.989 10:00:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:06.247 10:00:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:06.247 10:00:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71276 00:19:06.247 10:00:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:06.247 10:00:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:06.247 10:00:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:06.815 10:00:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:06.815 10:00:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71276 00:19:06.815 10:00:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:06.815 10:00:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:06.815 10:00:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:07.076 10:00:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:07.076 10:00:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71276 00:19:07.076 10:00:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:07.076 10:00:20 nvmf_tcp.nvmf_connect_stress -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:19:07.076 10:00:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:07.076 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:19:07.336 10:00:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:07.336 10:00:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71276 00:19:07.336 /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (71276) - No such process 00:19:07.336 10:00:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 71276 00:19:07.336 10:00:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt 00:19:07.336 10:00:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:19:07.336 10:00:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:19:07.336 10:00:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:07.336 10:00:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@117 -- # sync 00:19:07.336 10:00:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:07.336 10:00:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@120 -- # set +e 00:19:07.336 10:00:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:07.336 10:00:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:07.336 rmmod nvme_tcp 00:19:07.336 rmmod nvme_fabrics 00:19:07.336 rmmod nvme_keyring 00:19:07.336 10:00:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:07.336 10:00:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@124 -- # set -e 00:19:07.336 10:00:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@125 -- # return 0 00:19:07.336 10:00:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@489 -- # '[' -n 71224 ']' 00:19:07.336 10:00:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@490 -- # killprocess 71224 00:19:07.336 10:00:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@948 -- # '[' -z 71224 ']' 00:19:07.336 10:00:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@952 -- # kill -0 71224 00:19:07.336 10:00:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@953 -- # uname 00:19:07.336 10:00:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:07.336 10:00:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 71224 00:19:07.596 killing process with pid 71224 00:19:07.596 10:00:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:19:07.596 10:00:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:19:07.596 10:00:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@966 -- # echo 'killing process with pid 71224' 00:19:07.596 10:00:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@967 -- # kill 71224 00:19:07.596 10:00:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@972 -- # wait 71224 00:19:07.596 10:00:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:07.596 10:00:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:07.596 10:00:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 
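The teardown traced around this point mirrors the one that closed the hotplug test above: nvmftestfini unloads the host-side NVMe/TCP modules, kills the nvmf_tgt started for this test, and tears the veth/netns plumbing back down. Condensed from the trace (helper internals such as killprocess and _remove_spdk_ns are assumptions based on their names and the surrounding output):

  trap - SIGINT SIGTERM EXIT
  sync
  modprobe -v -r nvme-tcp          # the rmmod lines above come from this step (nvme_tcp, nvme_fabrics, nvme_keyring)
  modprobe -v -r nvme-fabrics
  killprocess "$nvmfpid"           # 71224 for this run
  _remove_spdk_ns                  # assumed: deletes the nvmf_tgt_ns_spdk namespace
  ip -4 addr flush nvmf_init_if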
00:19:07.596 10:00:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:07.596 10:00:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:07.596 10:00:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:07.596 10:00:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:07.596 10:00:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:07.596 10:00:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:19:07.596 00:19:07.596 real 0m12.512s 00:19:07.596 user 0m42.063s 00:19:07.596 sys 0m2.811s 00:19:07.596 10:00:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:07.596 10:00:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:07.596 ************************************ 00:19:07.596 END TEST nvmf_connect_stress 00:19:07.596 ************************************ 00:19:07.856 10:00:21 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:19:07.856 10:00:21 nvmf_tcp -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /home/vagrant/spdk_repo/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:19:07.856 10:00:21 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:19:07.856 10:00:21 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:07.856 10:00:21 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:07.856 ************************************ 00:19:07.856 START TEST nvmf_fused_ordering 00:19:07.856 ************************************ 00:19:07.856 10:00:21 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:19:07.856 * Looking for test storage... 
00:19:07.856 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:19:07.856 10:00:21 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:07.856 10:00:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:19:07.856 10:00:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:07.856 10:00:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:07.856 10:00:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:07.856 10:00:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:07.856 10:00:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:07.856 10:00:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:07.856 10:00:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:07.856 10:00:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:07.856 10:00:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:07.856 10:00:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:07.856 10:00:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec 00:19:07.856 10:00:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=a2b6b25a-cc90-4aea-9f09-c06f8a634aec 00:19:07.856 10:00:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:07.856 10:00:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:07.856 10:00:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:07.856 10:00:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:07.856 10:00:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:07.856 10:00:21 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:07.856 10:00:21 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:07.856 10:00:21 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:07.856 10:00:21 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:07.856 10:00:21 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:07.856 10:00:21 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:07.856 10:00:21 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:19:07.856 10:00:21 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:07.856 10:00:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@47 -- # : 0 00:19:07.856 10:00:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:07.856 10:00:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:07.856 10:00:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:07.856 10:00:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:07.856 10:00:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:07.856 10:00:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:07.856 10:00:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:07.856 10:00:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:07.856 10:00:21 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:19:07.856 10:00:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:07.856 10:00:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:07.856 10:00:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:07.856 10:00:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:07.856 10:00:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:07.856 10:00:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:07.856 10:00:21 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:19:07.856 10:00:21 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:07.856 10:00:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:19:07.856 10:00:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:19:07.856 10:00:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:19:07.856 10:00:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:19:07.856 10:00:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:19:07.856 10:00:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@432 -- # nvmf_veth_init 00:19:07.856 10:00:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:07.856 10:00:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:07.856 10:00:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:19:07.856 10:00:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:19:07.856 10:00:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:07.856 10:00:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:07.856 10:00:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:07.856 10:00:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:07.856 10:00:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:07.856 10:00:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:07.856 10:00:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:07.856 10:00:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:07.856 10:00:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:19:07.856 10:00:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:19:07.856 Cannot find device "nvmf_tgt_br" 00:19:07.856 10:00:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@155 -- # true 00:19:07.856 10:00:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:19:08.116 Cannot find device "nvmf_tgt_br2" 00:19:08.116 10:00:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@156 -- # true 00:19:08.116 10:00:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:19:08.116 10:00:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:19:08.116 Cannot find device "nvmf_tgt_br" 00:19:08.116 10:00:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@158 -- # true 00:19:08.116 10:00:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:19:08.116 Cannot find device "nvmf_tgt_br2" 00:19:08.116 10:00:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@159 -- # true 00:19:08.116 10:00:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:19:08.116 10:00:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:19:08.116 10:00:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link delete nvmf_tgt_if 00:19:08.116 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:08.116 10:00:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@162 -- # true 00:19:08.116 10:00:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:08.116 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:08.116 10:00:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@163 -- # true 00:19:08.116 10:00:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:19:08.116 10:00:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:08.116 10:00:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:08.116 10:00:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:08.116 10:00:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:08.116 10:00:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:08.116 10:00:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:08.116 10:00:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:19:08.116 10:00:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:19:08.116 10:00:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:19:08.116 10:00:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:19:08.116 10:00:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:19:08.116 10:00:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:19:08.116 10:00:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:08.116 10:00:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:08.116 10:00:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:08.116 10:00:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:19:08.116 10:00:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:19:08.116 10:00:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:19:08.116 10:00:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:08.116 10:00:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:08.116 10:00:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:08.116 10:00:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:08.116 10:00:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:19:08.116 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:19:08.116 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.057 ms 00:19:08.116 00:19:08.116 --- 10.0.0.2 ping statistics --- 00:19:08.116 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:08.116 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:19:08.116 10:00:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:19:08.116 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:08.116 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.051 ms 00:19:08.116 00:19:08.116 --- 10.0.0.3 ping statistics --- 00:19:08.116 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:08.116 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:19:08.116 10:00:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:08.116 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:08.116 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.021 ms 00:19:08.116 00:19:08.116 --- 10.0.0.1 ping statistics --- 00:19:08.116 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:08.116 rtt min/avg/max/mdev = 0.021/0.021/0.021/0.000 ms 00:19:08.116 10:00:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:08.116 10:00:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@433 -- # return 0 00:19:08.116 10:00:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:08.116 10:00:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:08.116 10:00:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:08.116 10:00:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:08.116 10:00:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:08.116 10:00:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:08.116 10:00:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:08.376 10:00:21 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:19:08.376 10:00:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:08.376 10:00:21 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:08.376 10:00:21 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:19:08.376 10:00:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@481 -- # nvmfpid=71589 00:19:08.376 10:00:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:08.376 10:00:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@482 -- # waitforlisten 71589 00:19:08.376 10:00:21 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@829 -- # '[' -z 71589 ']' 00:19:08.376 10:00:21 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:08.376 10:00:21 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:08.376 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:08.376 10:00:21 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
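For reference, the nvmf_veth_init sequence traced above reduces to a short, self-contained topology script: one network namespace for the target, three veth pairs whose host-side peers are enslaved to a bridge, static 10.0.0.x/24 addressing, an iptables rule opening TCP port 4420, and ping checks in both directions (the earlier "Cannot find device"/"Cannot open network namespace" messages come from tearing down interfaces that did not exist yet). The sketch below simply restates those commands from the log in one place; it is a convenience summary, not a replacement for nvmf/common.sh.

#!/usr/bin/env bash
# Condensed restatement of the veth/namespace topology built by nvmf_veth_init above.
set -e

NS=nvmf_tgt_ns_spdk

ip netns add "$NS"

# Three veth pairs: initiator side stays in the host, target sides move into the namespace.
ip link add nvmf_init_if  type veth peer name nvmf_init_br
ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns "$NS"
ip link set nvmf_tgt_if2 netns "$NS"

# Addressing: 10.0.0.1 = initiator, 10.0.0.2/10.0.0.3 = target listeners.
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec "$NS" ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

# Bring everything up and bridge the host-side peers together.
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br  up
ip link set nvmf_tgt_br2 up
ip netns exec "$NS" ip link set nvmf_tgt_if  up
ip netns exec "$NS" ip link set nvmf_tgt_if2 up
ip netns exec "$NS" ip link set lo up

ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br

# Allow NVMe/TCP traffic to the default port and forwarding across the bridge.
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

# Sanity checks, as in the log: the pings should succeed in both directions.
ping -c 1 10.0.0.2
ping -c 1 10.0.0.3
ip netns exec "$NS" ping -c 1 10.0.0.1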
00:19:08.376 10:00:21 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:08.376 10:00:21 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:19:08.376 [2024-07-15 10:00:21.763455] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:19:08.376 [2024-07-15 10:00:21.763529] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:08.376 [2024-07-15 10:00:21.891131] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:08.635 [2024-07-15 10:00:22.000075] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:08.635 [2024-07-15 10:00:22.000143] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:08.635 [2024-07-15 10:00:22.000151] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:08.635 [2024-07-15 10:00:22.000157] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:08.635 [2024-07-15 10:00:22.000162] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:08.635 [2024-07-15 10:00:22.000190] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:09.203 10:00:22 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:09.203 10:00:22 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@862 -- # return 0 00:19:09.203 10:00:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:09.203 10:00:22 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:09.203 10:00:22 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:19:09.203 10:00:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:09.203 10:00:22 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:09.203 10:00:22 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:09.203 10:00:22 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:19:09.203 [2024-07-15 10:00:22.707468] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:09.203 10:00:22 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:09.203 10:00:22 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:19:09.203 10:00:22 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:09.203 10:00:22 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:19:09.203 10:00:22 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:09.203 10:00:22 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:09.203 10:00:22 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:09.203 10:00:22 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 
00:19:09.203 [2024-07-15 10:00:22.731528] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:09.203 10:00:22 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:09.203 10:00:22 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:19:09.203 10:00:22 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:09.203 10:00:22 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:19:09.203 NULL1 00:19:09.203 10:00:22 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:09.203 10:00:22 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:19:09.203 10:00:22 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:09.203 10:00:22 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:19:09.203 10:00:22 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:09.203 10:00:22 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:19:09.203 10:00:22 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:09.203 10:00:22 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:19:09.203 10:00:22 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:09.203 10:00:22 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:19:09.462 [2024-07-15 10:00:22.801493] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
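The rpc_cmd calls above amount to a short provisioning sequence against the nvmf_tgt that was started inside the nvmf_tgt_ns_spdk namespace: create the TCP transport, create subsystem cnode1, add a 10.0.0.2:4420 listener, back it with a 1000 MiB null bdev, and attach that bdev as a namespace. Assuming rpc_cmd is a thin wrapper over scripts/rpc.py talking to the default /var/tmp/spdk.sock socket (that mapping is an assumption here, not something the log states), the equivalent standalone commands would look roughly like this:

#!/usr/bin/env bash
# Rough standalone equivalent of the rpc_cmd provisioning sequence traced above.
# Assumes scripts/rpc.py and the default /var/tmp/spdk.sock RPC socket.
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

$RPC nvmf_create_transport -t tcp -o -u 8192          # flags passed through from NVMF_TRANSPORT_OPTS plus the test's -u 8192
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$RPC bdev_null_create NULL1 1000 512                  # 1000 MiB null bdev, 512-byte blocks; matches "size: 1GB" below
$RPC bdev_wait_for_examine
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1

# The test then exercises the subsystem with the fused_ordering tool, exactly as in the log:
/home/vagrant/spdk_repo/spdk/test/nvme/fused_ordering/fused_ordering \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'

A kernel initiator could reach the same listener with stock nvme-cli, e.g. nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 (the NVME_CONNECT/NVME_HOST variables sourced earlier exist for that kind of call); here, though, the fused_ordering binary connects through the SPDK initiator itself, which is what produces the fused_ordering(0)…fused_ordering(1023) lines that follow.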
00:19:09.462 [2024-07-15 10:00:22.801552] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71648 ] 00:19:09.720 Attached to nqn.2016-06.io.spdk:cnode1 00:19:09.720 Namespace ID: 1 size: 1GB 00:19:09.720 fused_ordering(0) 00:19:09.720 fused_ordering(1) 00:19:09.720 fused_ordering(2) 00:19:09.720 fused_ordering(3) 00:19:09.720 fused_ordering(4) 00:19:09.720 fused_ordering(5) 00:19:09.720 fused_ordering(6) 00:19:09.720 fused_ordering(7) 00:19:09.720 fused_ordering(8) 00:19:09.720 fused_ordering(9) 00:19:09.720 fused_ordering(10) 00:19:09.720 fused_ordering(11) 00:19:09.720 fused_ordering(12) 00:19:09.720 fused_ordering(13) 00:19:09.720 fused_ordering(14) 00:19:09.720 fused_ordering(15) 00:19:09.720 fused_ordering(16) 00:19:09.720 fused_ordering(17) 00:19:09.720 fused_ordering(18) 00:19:09.720 fused_ordering(19) 00:19:09.720 fused_ordering(20) 00:19:09.720 fused_ordering(21) 00:19:09.720 fused_ordering(22) 00:19:09.720 fused_ordering(23) 00:19:09.720 fused_ordering(24) 00:19:09.720 fused_ordering(25) 00:19:09.720 fused_ordering(26) 00:19:09.720 fused_ordering(27) 00:19:09.720 fused_ordering(28) 00:19:09.720 fused_ordering(29) 00:19:09.720 fused_ordering(30) 00:19:09.720 fused_ordering(31) 00:19:09.720 fused_ordering(32) 00:19:09.720 fused_ordering(33) 00:19:09.720 fused_ordering(34) 00:19:09.720 fused_ordering(35) 00:19:09.720 fused_ordering(36) 00:19:09.720 fused_ordering(37) 00:19:09.720 fused_ordering(38) 00:19:09.720 fused_ordering(39) 00:19:09.720 fused_ordering(40) 00:19:09.720 fused_ordering(41) 00:19:09.720 fused_ordering(42) 00:19:09.720 fused_ordering(43) 00:19:09.720 fused_ordering(44) 00:19:09.720 fused_ordering(45) 00:19:09.720 fused_ordering(46) 00:19:09.720 fused_ordering(47) 00:19:09.720 fused_ordering(48) 00:19:09.720 fused_ordering(49) 00:19:09.720 fused_ordering(50) 00:19:09.720 fused_ordering(51) 00:19:09.720 fused_ordering(52) 00:19:09.720 fused_ordering(53) 00:19:09.720 fused_ordering(54) 00:19:09.720 fused_ordering(55) 00:19:09.720 fused_ordering(56) 00:19:09.720 fused_ordering(57) 00:19:09.720 fused_ordering(58) 00:19:09.720 fused_ordering(59) 00:19:09.720 fused_ordering(60) 00:19:09.720 fused_ordering(61) 00:19:09.720 fused_ordering(62) 00:19:09.720 fused_ordering(63) 00:19:09.720 fused_ordering(64) 00:19:09.720 fused_ordering(65) 00:19:09.720 fused_ordering(66) 00:19:09.720 fused_ordering(67) 00:19:09.720 fused_ordering(68) 00:19:09.720 fused_ordering(69) 00:19:09.720 fused_ordering(70) 00:19:09.720 fused_ordering(71) 00:19:09.720 fused_ordering(72) 00:19:09.720 fused_ordering(73) 00:19:09.720 fused_ordering(74) 00:19:09.720 fused_ordering(75) 00:19:09.720 fused_ordering(76) 00:19:09.720 fused_ordering(77) 00:19:09.720 fused_ordering(78) 00:19:09.720 fused_ordering(79) 00:19:09.720 fused_ordering(80) 00:19:09.720 fused_ordering(81) 00:19:09.720 fused_ordering(82) 00:19:09.720 fused_ordering(83) 00:19:09.720 fused_ordering(84) 00:19:09.720 fused_ordering(85) 00:19:09.720 fused_ordering(86) 00:19:09.720 fused_ordering(87) 00:19:09.720 fused_ordering(88) 00:19:09.720 fused_ordering(89) 00:19:09.720 fused_ordering(90) 00:19:09.720 fused_ordering(91) 00:19:09.720 fused_ordering(92) 00:19:09.720 fused_ordering(93) 00:19:09.720 fused_ordering(94) 00:19:09.720 fused_ordering(95) 00:19:09.720 fused_ordering(96) 00:19:09.720 fused_ordering(97) 00:19:09.720 
fused_ordering(98) 00:19:09.720 fused_ordering(99) 00:19:09.720 fused_ordering(100) 00:19:09.720 fused_ordering(101) 00:19:09.720 fused_ordering(102) 00:19:09.720 fused_ordering(103) 00:19:09.720 fused_ordering(104) 00:19:09.720 fused_ordering(105) 00:19:09.720 fused_ordering(106) 00:19:09.720 fused_ordering(107) 00:19:09.720 fused_ordering(108) 00:19:09.720 fused_ordering(109) 00:19:09.720 fused_ordering(110) 00:19:09.720 fused_ordering(111) 00:19:09.720 fused_ordering(112) 00:19:09.720 fused_ordering(113) 00:19:09.720 fused_ordering(114) 00:19:09.720 fused_ordering(115) 00:19:09.720 fused_ordering(116) 00:19:09.720 fused_ordering(117) 00:19:09.720 fused_ordering(118) 00:19:09.720 fused_ordering(119) 00:19:09.720 fused_ordering(120) 00:19:09.720 fused_ordering(121) 00:19:09.720 fused_ordering(122) 00:19:09.720 fused_ordering(123) 00:19:09.720 fused_ordering(124) 00:19:09.720 fused_ordering(125) 00:19:09.720 fused_ordering(126) 00:19:09.720 fused_ordering(127) 00:19:09.720 fused_ordering(128) 00:19:09.720 fused_ordering(129) 00:19:09.720 fused_ordering(130) 00:19:09.720 fused_ordering(131) 00:19:09.720 fused_ordering(132) 00:19:09.720 fused_ordering(133) 00:19:09.720 fused_ordering(134) 00:19:09.720 fused_ordering(135) 00:19:09.720 fused_ordering(136) 00:19:09.720 fused_ordering(137) 00:19:09.720 fused_ordering(138) 00:19:09.720 fused_ordering(139) 00:19:09.720 fused_ordering(140) 00:19:09.720 fused_ordering(141) 00:19:09.720 fused_ordering(142) 00:19:09.720 fused_ordering(143) 00:19:09.720 fused_ordering(144) 00:19:09.720 fused_ordering(145) 00:19:09.720 fused_ordering(146) 00:19:09.720 fused_ordering(147) 00:19:09.720 fused_ordering(148) 00:19:09.720 fused_ordering(149) 00:19:09.720 fused_ordering(150) 00:19:09.720 fused_ordering(151) 00:19:09.720 fused_ordering(152) 00:19:09.720 fused_ordering(153) 00:19:09.720 fused_ordering(154) 00:19:09.720 fused_ordering(155) 00:19:09.720 fused_ordering(156) 00:19:09.720 fused_ordering(157) 00:19:09.720 fused_ordering(158) 00:19:09.720 fused_ordering(159) 00:19:09.720 fused_ordering(160) 00:19:09.720 fused_ordering(161) 00:19:09.720 fused_ordering(162) 00:19:09.720 fused_ordering(163) 00:19:09.720 fused_ordering(164) 00:19:09.720 fused_ordering(165) 00:19:09.720 fused_ordering(166) 00:19:09.720 fused_ordering(167) 00:19:09.720 fused_ordering(168) 00:19:09.720 fused_ordering(169) 00:19:09.720 fused_ordering(170) 00:19:09.720 fused_ordering(171) 00:19:09.720 fused_ordering(172) 00:19:09.720 fused_ordering(173) 00:19:09.720 fused_ordering(174) 00:19:09.720 fused_ordering(175) 00:19:09.720 fused_ordering(176) 00:19:09.720 fused_ordering(177) 00:19:09.720 fused_ordering(178) 00:19:09.720 fused_ordering(179) 00:19:09.720 fused_ordering(180) 00:19:09.720 fused_ordering(181) 00:19:09.720 fused_ordering(182) 00:19:09.720 fused_ordering(183) 00:19:09.720 fused_ordering(184) 00:19:09.720 fused_ordering(185) 00:19:09.720 fused_ordering(186) 00:19:09.720 fused_ordering(187) 00:19:09.720 fused_ordering(188) 00:19:09.720 fused_ordering(189) 00:19:09.720 fused_ordering(190) 00:19:09.720 fused_ordering(191) 00:19:09.720 fused_ordering(192) 00:19:09.720 fused_ordering(193) 00:19:09.720 fused_ordering(194) 00:19:09.720 fused_ordering(195) 00:19:09.720 fused_ordering(196) 00:19:09.720 fused_ordering(197) 00:19:09.720 fused_ordering(198) 00:19:09.720 fused_ordering(199) 00:19:09.720 fused_ordering(200) 00:19:09.720 fused_ordering(201) 00:19:09.720 fused_ordering(202) 00:19:09.720 fused_ordering(203) 00:19:09.720 fused_ordering(204) 00:19:09.720 fused_ordering(205) 
00:19:09.978 fused_ordering(206) 00:19:09.978 fused_ordering(207) 00:19:09.978 fused_ordering(208) 00:19:09.978 fused_ordering(209) 00:19:09.978 fused_ordering(210) 00:19:09.978 fused_ordering(211) 00:19:09.978 fused_ordering(212) 00:19:09.978 fused_ordering(213) 00:19:09.978 fused_ordering(214) 00:19:09.978 fused_ordering(215) 00:19:09.978 fused_ordering(216) 00:19:09.978 fused_ordering(217) 00:19:09.978 fused_ordering(218) 00:19:09.978 fused_ordering(219) 00:19:09.978 fused_ordering(220) 00:19:09.978 fused_ordering(221) 00:19:09.978 fused_ordering(222) 00:19:09.978 fused_ordering(223) 00:19:09.978 fused_ordering(224) 00:19:09.978 fused_ordering(225) 00:19:09.978 fused_ordering(226) 00:19:09.978 fused_ordering(227) 00:19:09.978 fused_ordering(228) 00:19:09.978 fused_ordering(229) 00:19:09.978 fused_ordering(230) 00:19:09.978 fused_ordering(231) 00:19:09.978 fused_ordering(232) 00:19:09.978 fused_ordering(233) 00:19:09.978 fused_ordering(234) 00:19:09.978 fused_ordering(235) 00:19:09.978 fused_ordering(236) 00:19:09.978 fused_ordering(237) 00:19:09.978 fused_ordering(238) 00:19:09.978 fused_ordering(239) 00:19:09.978 fused_ordering(240) 00:19:09.978 fused_ordering(241) 00:19:09.978 fused_ordering(242) 00:19:09.978 fused_ordering(243) 00:19:09.978 fused_ordering(244) 00:19:09.978 fused_ordering(245) 00:19:09.978 fused_ordering(246) 00:19:09.978 fused_ordering(247) 00:19:09.978 fused_ordering(248) 00:19:09.978 fused_ordering(249) 00:19:09.978 fused_ordering(250) 00:19:09.978 fused_ordering(251) 00:19:09.978 fused_ordering(252) 00:19:09.978 fused_ordering(253) 00:19:09.978 fused_ordering(254) 00:19:09.978 fused_ordering(255) 00:19:09.978 fused_ordering(256) 00:19:09.978 fused_ordering(257) 00:19:09.978 fused_ordering(258) 00:19:09.978 fused_ordering(259) 00:19:09.978 fused_ordering(260) 00:19:09.978 fused_ordering(261) 00:19:09.978 fused_ordering(262) 00:19:09.978 fused_ordering(263) 00:19:09.978 fused_ordering(264) 00:19:09.978 fused_ordering(265) 00:19:09.978 fused_ordering(266) 00:19:09.978 fused_ordering(267) 00:19:09.978 fused_ordering(268) 00:19:09.978 fused_ordering(269) 00:19:09.978 fused_ordering(270) 00:19:09.978 fused_ordering(271) 00:19:09.978 fused_ordering(272) 00:19:09.978 fused_ordering(273) 00:19:09.978 fused_ordering(274) 00:19:09.978 fused_ordering(275) 00:19:09.978 fused_ordering(276) 00:19:09.978 fused_ordering(277) 00:19:09.978 fused_ordering(278) 00:19:09.978 fused_ordering(279) 00:19:09.978 fused_ordering(280) 00:19:09.978 fused_ordering(281) 00:19:09.978 fused_ordering(282) 00:19:09.978 fused_ordering(283) 00:19:09.978 fused_ordering(284) 00:19:09.978 fused_ordering(285) 00:19:09.978 fused_ordering(286) 00:19:09.978 fused_ordering(287) 00:19:09.978 fused_ordering(288) 00:19:09.978 fused_ordering(289) 00:19:09.978 fused_ordering(290) 00:19:09.978 fused_ordering(291) 00:19:09.978 fused_ordering(292) 00:19:09.978 fused_ordering(293) 00:19:09.978 fused_ordering(294) 00:19:09.978 fused_ordering(295) 00:19:09.978 fused_ordering(296) 00:19:09.978 fused_ordering(297) 00:19:09.978 fused_ordering(298) 00:19:09.978 fused_ordering(299) 00:19:09.978 fused_ordering(300) 00:19:09.978 fused_ordering(301) 00:19:09.978 fused_ordering(302) 00:19:09.978 fused_ordering(303) 00:19:09.978 fused_ordering(304) 00:19:09.978 fused_ordering(305) 00:19:09.978 fused_ordering(306) 00:19:09.978 fused_ordering(307) 00:19:09.978 fused_ordering(308) 00:19:09.978 fused_ordering(309) 00:19:09.978 fused_ordering(310) 00:19:09.978 fused_ordering(311) 00:19:09.978 fused_ordering(312) 00:19:09.978 
fused_ordering(313) 00:19:09.978 fused_ordering(314) 00:19:09.978 fused_ordering(315) 00:19:09.978 fused_ordering(316) 00:19:09.978 fused_ordering(317) 00:19:09.978 fused_ordering(318) 00:19:09.978 fused_ordering(319) 00:19:09.978 fused_ordering(320) 00:19:09.978 fused_ordering(321) 00:19:09.978 fused_ordering(322) 00:19:09.978 fused_ordering(323) 00:19:09.978 fused_ordering(324) 00:19:09.978 fused_ordering(325) 00:19:09.978 fused_ordering(326) 00:19:09.978 fused_ordering(327) 00:19:09.978 fused_ordering(328) 00:19:09.978 fused_ordering(329) 00:19:09.978 fused_ordering(330) 00:19:09.978 fused_ordering(331) 00:19:09.978 fused_ordering(332) 00:19:09.978 fused_ordering(333) 00:19:09.978 fused_ordering(334) 00:19:09.978 fused_ordering(335) 00:19:09.978 fused_ordering(336) 00:19:09.978 fused_ordering(337) 00:19:09.978 fused_ordering(338) 00:19:09.978 fused_ordering(339) 00:19:09.978 fused_ordering(340) 00:19:09.978 fused_ordering(341) 00:19:09.978 fused_ordering(342) 00:19:09.978 fused_ordering(343) 00:19:09.978 fused_ordering(344) 00:19:09.978 fused_ordering(345) 00:19:09.978 fused_ordering(346) 00:19:09.978 fused_ordering(347) 00:19:09.978 fused_ordering(348) 00:19:09.978 fused_ordering(349) 00:19:09.978 fused_ordering(350) 00:19:09.978 fused_ordering(351) 00:19:09.978 fused_ordering(352) 00:19:09.978 fused_ordering(353) 00:19:09.978 fused_ordering(354) 00:19:09.978 fused_ordering(355) 00:19:09.979 fused_ordering(356) 00:19:09.979 fused_ordering(357) 00:19:09.979 fused_ordering(358) 00:19:09.979 fused_ordering(359) 00:19:09.979 fused_ordering(360) 00:19:09.979 fused_ordering(361) 00:19:09.979 fused_ordering(362) 00:19:09.979 fused_ordering(363) 00:19:09.979 fused_ordering(364) 00:19:09.979 fused_ordering(365) 00:19:09.979 fused_ordering(366) 00:19:09.979 fused_ordering(367) 00:19:09.979 fused_ordering(368) 00:19:09.979 fused_ordering(369) 00:19:09.979 fused_ordering(370) 00:19:09.979 fused_ordering(371) 00:19:09.979 fused_ordering(372) 00:19:09.979 fused_ordering(373) 00:19:09.979 fused_ordering(374) 00:19:09.979 fused_ordering(375) 00:19:09.979 fused_ordering(376) 00:19:09.979 fused_ordering(377) 00:19:09.979 fused_ordering(378) 00:19:09.979 fused_ordering(379) 00:19:09.979 fused_ordering(380) 00:19:09.979 fused_ordering(381) 00:19:09.979 fused_ordering(382) 00:19:09.979 fused_ordering(383) 00:19:09.979 fused_ordering(384) 00:19:09.979 fused_ordering(385) 00:19:09.979 fused_ordering(386) 00:19:09.979 fused_ordering(387) 00:19:09.979 fused_ordering(388) 00:19:09.979 fused_ordering(389) 00:19:09.979 fused_ordering(390) 00:19:09.979 fused_ordering(391) 00:19:09.979 fused_ordering(392) 00:19:09.979 fused_ordering(393) 00:19:09.979 fused_ordering(394) 00:19:09.979 fused_ordering(395) 00:19:09.979 fused_ordering(396) 00:19:09.979 fused_ordering(397) 00:19:09.979 fused_ordering(398) 00:19:09.979 fused_ordering(399) 00:19:09.979 fused_ordering(400) 00:19:09.979 fused_ordering(401) 00:19:09.979 fused_ordering(402) 00:19:09.979 fused_ordering(403) 00:19:09.979 fused_ordering(404) 00:19:09.979 fused_ordering(405) 00:19:09.979 fused_ordering(406) 00:19:09.979 fused_ordering(407) 00:19:09.979 fused_ordering(408) 00:19:09.979 fused_ordering(409) 00:19:09.979 fused_ordering(410) 00:19:10.237 fused_ordering(411) 00:19:10.237 fused_ordering(412) 00:19:10.237 fused_ordering(413) 00:19:10.237 fused_ordering(414) 00:19:10.237 fused_ordering(415) 00:19:10.237 fused_ordering(416) 00:19:10.237 fused_ordering(417) 00:19:10.237 fused_ordering(418) 00:19:10.237 fused_ordering(419) 00:19:10.237 fused_ordering(420) 
00:19:10.237 fused_ordering(421) 00:19:10.237 fused_ordering(422) 00:19:10.237 fused_ordering(423) 00:19:10.237 fused_ordering(424) 00:19:10.237 fused_ordering(425) 00:19:10.237 fused_ordering(426) 00:19:10.237 fused_ordering(427) 00:19:10.237 fused_ordering(428) 00:19:10.237 fused_ordering(429) 00:19:10.237 fused_ordering(430) 00:19:10.237 fused_ordering(431) 00:19:10.237 fused_ordering(432) 00:19:10.237 fused_ordering(433) 00:19:10.237 fused_ordering(434) 00:19:10.237 fused_ordering(435) 00:19:10.237 fused_ordering(436) 00:19:10.237 fused_ordering(437) 00:19:10.237 fused_ordering(438) 00:19:10.237 fused_ordering(439) 00:19:10.237 fused_ordering(440) 00:19:10.237 fused_ordering(441) 00:19:10.237 fused_ordering(442) 00:19:10.237 fused_ordering(443) 00:19:10.237 fused_ordering(444) 00:19:10.237 fused_ordering(445) 00:19:10.237 fused_ordering(446) 00:19:10.237 fused_ordering(447) 00:19:10.237 fused_ordering(448) 00:19:10.237 fused_ordering(449) 00:19:10.237 fused_ordering(450) 00:19:10.237 fused_ordering(451) 00:19:10.237 fused_ordering(452) 00:19:10.237 fused_ordering(453) 00:19:10.237 fused_ordering(454) 00:19:10.237 fused_ordering(455) 00:19:10.237 fused_ordering(456) 00:19:10.237 fused_ordering(457) 00:19:10.237 fused_ordering(458) 00:19:10.237 fused_ordering(459) 00:19:10.237 fused_ordering(460) 00:19:10.237 fused_ordering(461) 00:19:10.237 fused_ordering(462) 00:19:10.237 fused_ordering(463) 00:19:10.237 fused_ordering(464) 00:19:10.237 fused_ordering(465) 00:19:10.237 fused_ordering(466) 00:19:10.237 fused_ordering(467) 00:19:10.237 fused_ordering(468) 00:19:10.237 fused_ordering(469) 00:19:10.237 fused_ordering(470) 00:19:10.237 fused_ordering(471) 00:19:10.237 fused_ordering(472) 00:19:10.237 fused_ordering(473) 00:19:10.237 fused_ordering(474) 00:19:10.237 fused_ordering(475) 00:19:10.237 fused_ordering(476) 00:19:10.238 fused_ordering(477) 00:19:10.238 fused_ordering(478) 00:19:10.238 fused_ordering(479) 00:19:10.238 fused_ordering(480) 00:19:10.238 fused_ordering(481) 00:19:10.238 fused_ordering(482) 00:19:10.238 fused_ordering(483) 00:19:10.238 fused_ordering(484) 00:19:10.238 fused_ordering(485) 00:19:10.238 fused_ordering(486) 00:19:10.238 fused_ordering(487) 00:19:10.238 fused_ordering(488) 00:19:10.238 fused_ordering(489) 00:19:10.238 fused_ordering(490) 00:19:10.238 fused_ordering(491) 00:19:10.238 fused_ordering(492) 00:19:10.238 fused_ordering(493) 00:19:10.238 fused_ordering(494) 00:19:10.238 fused_ordering(495) 00:19:10.238 fused_ordering(496) 00:19:10.238 fused_ordering(497) 00:19:10.238 fused_ordering(498) 00:19:10.238 fused_ordering(499) 00:19:10.238 fused_ordering(500) 00:19:10.238 fused_ordering(501) 00:19:10.238 fused_ordering(502) 00:19:10.238 fused_ordering(503) 00:19:10.238 fused_ordering(504) 00:19:10.238 fused_ordering(505) 00:19:10.238 fused_ordering(506) 00:19:10.238 fused_ordering(507) 00:19:10.238 fused_ordering(508) 00:19:10.238 fused_ordering(509) 00:19:10.238 fused_ordering(510) 00:19:10.238 fused_ordering(511) 00:19:10.238 fused_ordering(512) 00:19:10.238 fused_ordering(513) 00:19:10.238 fused_ordering(514) 00:19:10.238 fused_ordering(515) 00:19:10.238 fused_ordering(516) 00:19:10.238 fused_ordering(517) 00:19:10.238 fused_ordering(518) 00:19:10.238 fused_ordering(519) 00:19:10.238 fused_ordering(520) 00:19:10.238 fused_ordering(521) 00:19:10.238 fused_ordering(522) 00:19:10.238 fused_ordering(523) 00:19:10.238 fused_ordering(524) 00:19:10.238 fused_ordering(525) 00:19:10.238 fused_ordering(526) 00:19:10.238 fused_ordering(527) 00:19:10.238 
fused_ordering(528) 00:19:10.238 fused_ordering(529) 00:19:10.238 fused_ordering(530) 00:19:10.238 fused_ordering(531) 00:19:10.238 fused_ordering(532) 00:19:10.238 fused_ordering(533) 00:19:10.238 fused_ordering(534) 00:19:10.238 fused_ordering(535) 00:19:10.238 fused_ordering(536) 00:19:10.238 fused_ordering(537) 00:19:10.238 fused_ordering(538) 00:19:10.238 fused_ordering(539) 00:19:10.238 fused_ordering(540) 00:19:10.238 fused_ordering(541) 00:19:10.238 fused_ordering(542) 00:19:10.238 fused_ordering(543) 00:19:10.238 fused_ordering(544) 00:19:10.238 fused_ordering(545) 00:19:10.238 fused_ordering(546) 00:19:10.238 fused_ordering(547) 00:19:10.238 fused_ordering(548) 00:19:10.238 fused_ordering(549) 00:19:10.238 fused_ordering(550) 00:19:10.238 fused_ordering(551) 00:19:10.238 fused_ordering(552) 00:19:10.238 fused_ordering(553) 00:19:10.238 fused_ordering(554) 00:19:10.238 fused_ordering(555) 00:19:10.238 fused_ordering(556) 00:19:10.238 fused_ordering(557) 00:19:10.238 fused_ordering(558) 00:19:10.238 fused_ordering(559) 00:19:10.238 fused_ordering(560) 00:19:10.238 fused_ordering(561) 00:19:10.238 fused_ordering(562) 00:19:10.238 fused_ordering(563) 00:19:10.238 fused_ordering(564) 00:19:10.238 fused_ordering(565) 00:19:10.238 fused_ordering(566) 00:19:10.238 fused_ordering(567) 00:19:10.238 fused_ordering(568) 00:19:10.238 fused_ordering(569) 00:19:10.238 fused_ordering(570) 00:19:10.238 fused_ordering(571) 00:19:10.238 fused_ordering(572) 00:19:10.238 fused_ordering(573) 00:19:10.238 fused_ordering(574) 00:19:10.238 fused_ordering(575) 00:19:10.238 fused_ordering(576) 00:19:10.238 fused_ordering(577) 00:19:10.238 fused_ordering(578) 00:19:10.238 fused_ordering(579) 00:19:10.238 fused_ordering(580) 00:19:10.238 fused_ordering(581) 00:19:10.238 fused_ordering(582) 00:19:10.238 fused_ordering(583) 00:19:10.238 fused_ordering(584) 00:19:10.238 fused_ordering(585) 00:19:10.238 fused_ordering(586) 00:19:10.238 fused_ordering(587) 00:19:10.238 fused_ordering(588) 00:19:10.238 fused_ordering(589) 00:19:10.238 fused_ordering(590) 00:19:10.238 fused_ordering(591) 00:19:10.238 fused_ordering(592) 00:19:10.238 fused_ordering(593) 00:19:10.238 fused_ordering(594) 00:19:10.238 fused_ordering(595) 00:19:10.238 fused_ordering(596) 00:19:10.238 fused_ordering(597) 00:19:10.238 fused_ordering(598) 00:19:10.238 fused_ordering(599) 00:19:10.238 fused_ordering(600) 00:19:10.238 fused_ordering(601) 00:19:10.238 fused_ordering(602) 00:19:10.238 fused_ordering(603) 00:19:10.238 fused_ordering(604) 00:19:10.238 fused_ordering(605) 00:19:10.238 fused_ordering(606) 00:19:10.238 fused_ordering(607) 00:19:10.238 fused_ordering(608) 00:19:10.238 fused_ordering(609) 00:19:10.238 fused_ordering(610) 00:19:10.238 fused_ordering(611) 00:19:10.238 fused_ordering(612) 00:19:10.238 fused_ordering(613) 00:19:10.238 fused_ordering(614) 00:19:10.238 fused_ordering(615) 00:19:10.496 fused_ordering(616) 00:19:10.496 fused_ordering(617) 00:19:10.496 fused_ordering(618) 00:19:10.496 fused_ordering(619) 00:19:10.496 fused_ordering(620) 00:19:10.496 fused_ordering(621) 00:19:10.496 fused_ordering(622) 00:19:10.496 fused_ordering(623) 00:19:10.496 fused_ordering(624) 00:19:10.496 fused_ordering(625) 00:19:10.496 fused_ordering(626) 00:19:10.496 fused_ordering(627) 00:19:10.496 fused_ordering(628) 00:19:10.496 fused_ordering(629) 00:19:10.496 fused_ordering(630) 00:19:10.496 fused_ordering(631) 00:19:10.496 fused_ordering(632) 00:19:10.496 fused_ordering(633) 00:19:10.496 fused_ordering(634) 00:19:10.496 fused_ordering(635) 
00:19:10.496 fused_ordering(636) 00:19:10.496 fused_ordering(637) 00:19:10.496 fused_ordering(638) 00:19:10.496 fused_ordering(639) 00:19:10.496 fused_ordering(640) 00:19:10.496 fused_ordering(641) 00:19:10.496 fused_ordering(642) 00:19:10.496 fused_ordering(643) 00:19:10.496 fused_ordering(644) 00:19:10.496 fused_ordering(645) 00:19:10.496 fused_ordering(646) 00:19:10.496 fused_ordering(647) 00:19:10.496 fused_ordering(648) 00:19:10.496 fused_ordering(649) 00:19:10.496 fused_ordering(650) 00:19:10.496 fused_ordering(651) 00:19:10.496 fused_ordering(652) 00:19:10.496 fused_ordering(653) 00:19:10.496 fused_ordering(654) 00:19:10.496 fused_ordering(655) 00:19:10.496 fused_ordering(656) 00:19:10.496 fused_ordering(657) 00:19:10.496 fused_ordering(658) 00:19:10.496 fused_ordering(659) 00:19:10.496 fused_ordering(660) 00:19:10.496 fused_ordering(661) 00:19:10.496 fused_ordering(662) 00:19:10.496 fused_ordering(663) 00:19:10.496 fused_ordering(664) 00:19:10.496 fused_ordering(665) 00:19:10.496 fused_ordering(666) 00:19:10.496 fused_ordering(667) 00:19:10.496 fused_ordering(668) 00:19:10.496 fused_ordering(669) 00:19:10.496 fused_ordering(670) 00:19:10.496 fused_ordering(671) 00:19:10.496 fused_ordering(672) 00:19:10.496 fused_ordering(673) 00:19:10.496 fused_ordering(674) 00:19:10.496 fused_ordering(675) 00:19:10.496 fused_ordering(676) 00:19:10.496 fused_ordering(677) 00:19:10.496 fused_ordering(678) 00:19:10.496 fused_ordering(679) 00:19:10.496 fused_ordering(680) 00:19:10.496 fused_ordering(681) 00:19:10.496 fused_ordering(682) 00:19:10.496 fused_ordering(683) 00:19:10.496 fused_ordering(684) 00:19:10.496 fused_ordering(685) 00:19:10.496 fused_ordering(686) 00:19:10.496 fused_ordering(687) 00:19:10.496 fused_ordering(688) 00:19:10.496 fused_ordering(689) 00:19:10.496 fused_ordering(690) 00:19:10.496 fused_ordering(691) 00:19:10.496 fused_ordering(692) 00:19:10.496 fused_ordering(693) 00:19:10.496 fused_ordering(694) 00:19:10.496 fused_ordering(695) 00:19:10.496 fused_ordering(696) 00:19:10.496 fused_ordering(697) 00:19:10.496 fused_ordering(698) 00:19:10.496 fused_ordering(699) 00:19:10.496 fused_ordering(700) 00:19:10.496 fused_ordering(701) 00:19:10.496 fused_ordering(702) 00:19:10.496 fused_ordering(703) 00:19:10.496 fused_ordering(704) 00:19:10.496 fused_ordering(705) 00:19:10.496 fused_ordering(706) 00:19:10.496 fused_ordering(707) 00:19:10.496 fused_ordering(708) 00:19:10.496 fused_ordering(709) 00:19:10.496 fused_ordering(710) 00:19:10.496 fused_ordering(711) 00:19:10.496 fused_ordering(712) 00:19:10.496 fused_ordering(713) 00:19:10.496 fused_ordering(714) 00:19:10.496 fused_ordering(715) 00:19:10.496 fused_ordering(716) 00:19:10.496 fused_ordering(717) 00:19:10.496 fused_ordering(718) 00:19:10.496 fused_ordering(719) 00:19:10.496 fused_ordering(720) 00:19:10.496 fused_ordering(721) 00:19:10.496 fused_ordering(722) 00:19:10.496 fused_ordering(723) 00:19:10.496 fused_ordering(724) 00:19:10.496 fused_ordering(725) 00:19:10.496 fused_ordering(726) 00:19:10.496 fused_ordering(727) 00:19:10.496 fused_ordering(728) 00:19:10.496 fused_ordering(729) 00:19:10.496 fused_ordering(730) 00:19:10.496 fused_ordering(731) 00:19:10.496 fused_ordering(732) 00:19:10.496 fused_ordering(733) 00:19:10.496 fused_ordering(734) 00:19:10.496 fused_ordering(735) 00:19:10.496 fused_ordering(736) 00:19:10.496 fused_ordering(737) 00:19:10.496 fused_ordering(738) 00:19:10.496 fused_ordering(739) 00:19:10.496 fused_ordering(740) 00:19:10.496 fused_ordering(741) 00:19:10.496 fused_ordering(742) 00:19:10.496 
fused_ordering(743) 00:19:10.496 fused_ordering(744) 00:19:10.496 fused_ordering(745) 00:19:10.496 fused_ordering(746) 00:19:10.496 fused_ordering(747) 00:19:10.496 fused_ordering(748) 00:19:10.496 fused_ordering(749) 00:19:10.496 fused_ordering(750) 00:19:10.496 fused_ordering(751) 00:19:10.496 fused_ordering(752) 00:19:10.496 fused_ordering(753) 00:19:10.496 fused_ordering(754) 00:19:10.496 fused_ordering(755) 00:19:10.496 fused_ordering(756) 00:19:10.496 fused_ordering(757) 00:19:10.496 fused_ordering(758) 00:19:10.496 fused_ordering(759) 00:19:10.496 fused_ordering(760) 00:19:10.496 fused_ordering(761) 00:19:10.496 fused_ordering(762) 00:19:10.496 fused_ordering(763) 00:19:10.496 fused_ordering(764) 00:19:10.496 fused_ordering(765) 00:19:10.496 fused_ordering(766) 00:19:10.496 fused_ordering(767) 00:19:10.496 fused_ordering(768) 00:19:10.496 fused_ordering(769) 00:19:10.496 fused_ordering(770) 00:19:10.496 fused_ordering(771) 00:19:10.496 fused_ordering(772) 00:19:10.496 fused_ordering(773) 00:19:10.496 fused_ordering(774) 00:19:10.496 fused_ordering(775) 00:19:10.496 fused_ordering(776) 00:19:10.496 fused_ordering(777) 00:19:10.496 fused_ordering(778) 00:19:10.496 fused_ordering(779) 00:19:10.496 fused_ordering(780) 00:19:10.496 fused_ordering(781) 00:19:10.496 fused_ordering(782) 00:19:10.496 fused_ordering(783) 00:19:10.496 fused_ordering(784) 00:19:10.496 fused_ordering(785) 00:19:10.496 fused_ordering(786) 00:19:10.496 fused_ordering(787) 00:19:10.496 fused_ordering(788) 00:19:10.496 fused_ordering(789) 00:19:10.496 fused_ordering(790) 00:19:10.496 fused_ordering(791) 00:19:10.496 fused_ordering(792) 00:19:10.496 fused_ordering(793) 00:19:10.496 fused_ordering(794) 00:19:10.496 fused_ordering(795) 00:19:10.496 fused_ordering(796) 00:19:10.496 fused_ordering(797) 00:19:10.496 fused_ordering(798) 00:19:10.497 fused_ordering(799) 00:19:10.497 fused_ordering(800) 00:19:10.497 fused_ordering(801) 00:19:10.497 fused_ordering(802) 00:19:10.497 fused_ordering(803) 00:19:10.497 fused_ordering(804) 00:19:10.497 fused_ordering(805) 00:19:10.497 fused_ordering(806) 00:19:10.497 fused_ordering(807) 00:19:10.497 fused_ordering(808) 00:19:10.497 fused_ordering(809) 00:19:10.497 fused_ordering(810) 00:19:10.497 fused_ordering(811) 00:19:10.497 fused_ordering(812) 00:19:10.497 fused_ordering(813) 00:19:10.497 fused_ordering(814) 00:19:10.497 fused_ordering(815) 00:19:10.497 fused_ordering(816) 00:19:10.497 fused_ordering(817) 00:19:10.497 fused_ordering(818) 00:19:10.497 fused_ordering(819) 00:19:10.497 fused_ordering(820) 00:19:11.061 fused_ordering(821) 00:19:11.061 fused_ordering(822) 00:19:11.061 fused_ordering(823) 00:19:11.061 fused_ordering(824) 00:19:11.061 fused_ordering(825) 00:19:11.061 fused_ordering(826) 00:19:11.061 fused_ordering(827) 00:19:11.061 fused_ordering(828) 00:19:11.061 fused_ordering(829) 00:19:11.061 fused_ordering(830) 00:19:11.061 fused_ordering(831) 00:19:11.061 fused_ordering(832) 00:19:11.061 fused_ordering(833) 00:19:11.061 fused_ordering(834) 00:19:11.061 fused_ordering(835) 00:19:11.061 fused_ordering(836) 00:19:11.061 fused_ordering(837) 00:19:11.061 fused_ordering(838) 00:19:11.061 fused_ordering(839) 00:19:11.061 fused_ordering(840) 00:19:11.061 fused_ordering(841) 00:19:11.061 fused_ordering(842) 00:19:11.061 fused_ordering(843) 00:19:11.061 fused_ordering(844) 00:19:11.061 fused_ordering(845) 00:19:11.061 fused_ordering(846) 00:19:11.061 fused_ordering(847) 00:19:11.061 fused_ordering(848) 00:19:11.061 fused_ordering(849) 00:19:11.061 fused_ordering(850) 
00:19:11.061 fused_ordering(851) 00:19:11.061 fused_ordering(852) 00:19:11.061 fused_ordering(853) 00:19:11.061 fused_ordering(854) 00:19:11.061 fused_ordering(855) 00:19:11.061 fused_ordering(856) 00:19:11.061 fused_ordering(857) 00:19:11.061 fused_ordering(858) 00:19:11.061 fused_ordering(859) 00:19:11.061 fused_ordering(860) 00:19:11.061 fused_ordering(861) 00:19:11.061 fused_ordering(862) 00:19:11.061 fused_ordering(863) 00:19:11.061 fused_ordering(864) 00:19:11.061 fused_ordering(865) 00:19:11.061 fused_ordering(866) 00:19:11.061 fused_ordering(867) 00:19:11.061 fused_ordering(868) 00:19:11.061 fused_ordering(869) 00:19:11.061 fused_ordering(870) 00:19:11.061 fused_ordering(871) 00:19:11.061 fused_ordering(872) 00:19:11.061 fused_ordering(873) 00:19:11.061 fused_ordering(874) 00:19:11.061 fused_ordering(875) 00:19:11.061 fused_ordering(876) 00:19:11.061 fused_ordering(877) 00:19:11.061 fused_ordering(878) 00:19:11.061 fused_ordering(879) 00:19:11.061 fused_ordering(880) 00:19:11.061 fused_ordering(881) 00:19:11.061 fused_ordering(882) 00:19:11.061 fused_ordering(883) 00:19:11.061 fused_ordering(884) 00:19:11.061 fused_ordering(885) 00:19:11.061 fused_ordering(886) 00:19:11.061 fused_ordering(887) 00:19:11.061 fused_ordering(888) 00:19:11.061 fused_ordering(889) 00:19:11.061 fused_ordering(890) 00:19:11.061 fused_ordering(891) 00:19:11.061 fused_ordering(892) 00:19:11.061 fused_ordering(893) 00:19:11.061 fused_ordering(894) 00:19:11.061 fused_ordering(895) 00:19:11.061 fused_ordering(896) 00:19:11.061 fused_ordering(897) 00:19:11.061 fused_ordering(898) 00:19:11.061 fused_ordering(899) 00:19:11.061 fused_ordering(900) 00:19:11.061 fused_ordering(901) 00:19:11.061 fused_ordering(902) 00:19:11.061 fused_ordering(903) 00:19:11.061 fused_ordering(904) 00:19:11.061 fused_ordering(905) 00:19:11.061 fused_ordering(906) 00:19:11.061 fused_ordering(907) 00:19:11.061 fused_ordering(908) 00:19:11.061 fused_ordering(909) 00:19:11.061 fused_ordering(910) 00:19:11.061 fused_ordering(911) 00:19:11.061 fused_ordering(912) 00:19:11.061 fused_ordering(913) 00:19:11.061 fused_ordering(914) 00:19:11.061 fused_ordering(915) 00:19:11.061 fused_ordering(916) 00:19:11.061 fused_ordering(917) 00:19:11.061 fused_ordering(918) 00:19:11.061 fused_ordering(919) 00:19:11.061 fused_ordering(920) 00:19:11.061 fused_ordering(921) 00:19:11.061 fused_ordering(922) 00:19:11.061 fused_ordering(923) 00:19:11.061 fused_ordering(924) 00:19:11.061 fused_ordering(925) 00:19:11.061 fused_ordering(926) 00:19:11.061 fused_ordering(927) 00:19:11.061 fused_ordering(928) 00:19:11.061 fused_ordering(929) 00:19:11.061 fused_ordering(930) 00:19:11.061 fused_ordering(931) 00:19:11.061 fused_ordering(932) 00:19:11.061 fused_ordering(933) 00:19:11.061 fused_ordering(934) 00:19:11.061 fused_ordering(935) 00:19:11.061 fused_ordering(936) 00:19:11.061 fused_ordering(937) 00:19:11.061 fused_ordering(938) 00:19:11.061 fused_ordering(939) 00:19:11.061 fused_ordering(940) 00:19:11.061 fused_ordering(941) 00:19:11.061 fused_ordering(942) 00:19:11.061 fused_ordering(943) 00:19:11.061 fused_ordering(944) 00:19:11.061 fused_ordering(945) 00:19:11.061 fused_ordering(946) 00:19:11.061 fused_ordering(947) 00:19:11.061 fused_ordering(948) 00:19:11.061 fused_ordering(949) 00:19:11.061 fused_ordering(950) 00:19:11.061 fused_ordering(951) 00:19:11.061 fused_ordering(952) 00:19:11.061 fused_ordering(953) 00:19:11.061 fused_ordering(954) 00:19:11.061 fused_ordering(955) 00:19:11.061 fused_ordering(956) 00:19:11.061 fused_ordering(957) 00:19:11.061 
fused_ordering(958) 00:19:11.061 fused_ordering(959) 00:19:11.061 fused_ordering(960) 00:19:11.061 fused_ordering(961) 00:19:11.062 fused_ordering(962) 00:19:11.062 fused_ordering(963) 00:19:11.062 fused_ordering(964) 00:19:11.062 fused_ordering(965) 00:19:11.062 fused_ordering(966) 00:19:11.062 fused_ordering(967) 00:19:11.062 fused_ordering(968) 00:19:11.062 fused_ordering(969) 00:19:11.062 fused_ordering(970) 00:19:11.062 fused_ordering(971) 00:19:11.062 fused_ordering(972) 00:19:11.062 fused_ordering(973) 00:19:11.062 fused_ordering(974) 00:19:11.062 fused_ordering(975) 00:19:11.062 fused_ordering(976) 00:19:11.062 fused_ordering(977) 00:19:11.062 fused_ordering(978) 00:19:11.062 fused_ordering(979) 00:19:11.062 fused_ordering(980) 00:19:11.062 fused_ordering(981) 00:19:11.062 fused_ordering(982) 00:19:11.062 fused_ordering(983) 00:19:11.062 fused_ordering(984) 00:19:11.062 fused_ordering(985) 00:19:11.062 fused_ordering(986) 00:19:11.062 fused_ordering(987) 00:19:11.062 fused_ordering(988) 00:19:11.062 fused_ordering(989) 00:19:11.062 fused_ordering(990) 00:19:11.062 fused_ordering(991) 00:19:11.062 fused_ordering(992) 00:19:11.062 fused_ordering(993) 00:19:11.062 fused_ordering(994) 00:19:11.062 fused_ordering(995) 00:19:11.062 fused_ordering(996) 00:19:11.062 fused_ordering(997) 00:19:11.062 fused_ordering(998) 00:19:11.062 fused_ordering(999) 00:19:11.062 fused_ordering(1000) 00:19:11.062 fused_ordering(1001) 00:19:11.062 fused_ordering(1002) 00:19:11.062 fused_ordering(1003) 00:19:11.062 fused_ordering(1004) 00:19:11.062 fused_ordering(1005) 00:19:11.062 fused_ordering(1006) 00:19:11.062 fused_ordering(1007) 00:19:11.062 fused_ordering(1008) 00:19:11.062 fused_ordering(1009) 00:19:11.062 fused_ordering(1010) 00:19:11.062 fused_ordering(1011) 00:19:11.062 fused_ordering(1012) 00:19:11.062 fused_ordering(1013) 00:19:11.062 fused_ordering(1014) 00:19:11.062 fused_ordering(1015) 00:19:11.062 fused_ordering(1016) 00:19:11.062 fused_ordering(1017) 00:19:11.062 fused_ordering(1018) 00:19:11.062 fused_ordering(1019) 00:19:11.062 fused_ordering(1020) 00:19:11.062 fused_ordering(1021) 00:19:11.062 fused_ordering(1022) 00:19:11.062 fused_ordering(1023) 00:19:11.062 10:00:24 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:19:11.062 10:00:24 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:19:11.062 10:00:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:11.062 10:00:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@117 -- # sync 00:19:11.062 10:00:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:11.062 10:00:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@120 -- # set +e 00:19:11.062 10:00:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:11.062 10:00:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:11.062 rmmod nvme_tcp 00:19:11.062 rmmod nvme_fabrics 00:19:11.062 rmmod nvme_keyring 00:19:11.062 10:00:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:11.062 10:00:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set -e 00:19:11.062 10:00:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@125 -- # return 0 00:19:11.062 10:00:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@489 -- # '[' -n 71589 ']' 00:19:11.062 10:00:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@490 -- # killprocess 71589 00:19:11.062 10:00:24 
nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@948 -- # '[' -z 71589 ']' 00:19:11.062 10:00:24 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # kill -0 71589 00:19:11.062 10:00:24 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # uname 00:19:11.062 10:00:24 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:11.062 10:00:24 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 71589 00:19:11.062 10:00:24 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:19:11.062 10:00:24 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:19:11.062 killing process with pid 71589 00:19:11.062 10:00:24 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@966 -- # echo 'killing process with pid 71589' 00:19:11.062 10:00:24 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@967 -- # kill 71589 00:19:11.062 10:00:24 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # wait 71589 00:19:11.319 10:00:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:11.319 10:00:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:11.319 10:00:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:11.319 10:00:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:11.319 10:00:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:11.319 10:00:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:11.319 10:00:24 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:11.319 10:00:24 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:11.319 10:00:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:19:11.319 00:19:11.319 real 0m3.639s 00:19:11.319 user 0m4.273s 00:19:11.319 sys 0m1.125s 00:19:11.319 10:00:24 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:11.319 10:00:24 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:19:11.319 ************************************ 00:19:11.319 END TEST nvmf_fused_ordering 00:19:11.319 ************************************ 00:19:11.577 10:00:24 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:19:11.577 10:00:24 nvmf_tcp -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:19:11.577 10:00:24 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:19:11.577 10:00:24 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:11.577 10:00:24 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:11.577 ************************************ 00:19:11.577 START TEST nvmf_delete_subsystem 00:19:11.577 ************************************ 00:19:11.577 10:00:24 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:19:11.577 * Looking for test storage... 
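The END TEST / START TEST banners and the real/user/sys timing lines above come from the harness's run_test wrapper, which times each suite and brackets its xtrace output. The following is only a rough sketch of what such a wrapper looks like, reconstructed from the banner and timing output in this log rather than taken from the actual autotest_common.sh source:

run_test() {
    # Sketch only: bracket a test script with banners and a timing summary, as the log output suggests.
    local test_name=$1; shift
    echo "************************************"
    echo "START TEST $test_name"
    echo "************************************"
    time "$@"            # e.g. run_test nvmf_delete_subsystem .../delete_subsystem.sh --transport=tcp
    local rc=$?
    echo "************************************"
    echo "END TEST $test_name"
    echo "************************************"
    return $rc
}

Every suite in this log, including the nvmf_delete_subsystem run that starts here, is bracketed by this same pattern.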
00:19:11.577 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:19:11.577 10:00:25 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:11.577 10:00:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:19:11.577 10:00:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:11.577 10:00:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:11.577 10:00:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:11.577 10:00:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:11.577 10:00:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:11.577 10:00:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:11.577 10:00:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:11.577 10:00:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:11.577 10:00:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:11.577 10:00:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:11.577 10:00:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec 00:19:11.577 10:00:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=a2b6b25a-cc90-4aea-9f09-c06f8a634aec 00:19:11.577 10:00:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:11.577 10:00:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:11.577 10:00:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:11.577 10:00:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:11.577 10:00:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:11.577 10:00:25 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:11.577 10:00:25 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:11.577 10:00:25 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:11.577 10:00:25 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:11.577 10:00:25 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:11.577 10:00:25 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:11.577 10:00:25 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:19:11.577 10:00:25 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:11.577 10:00:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@47 -- # : 0 00:19:11.577 10:00:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:11.577 10:00:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:11.577 10:00:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:11.577 10:00:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:11.577 10:00:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:11.577 10:00:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:11.577 10:00:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:11.577 10:00:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:11.577 10:00:25 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:19:11.577 10:00:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:11.577 10:00:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:11.577 10:00:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:11.577 10:00:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:11.577 10:00:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:11.577 10:00:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:11.577 10:00:25 nvmf_tcp.nvmf_delete_subsystem -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:11.577 10:00:25 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:11.577 10:00:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:19:11.577 10:00:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:19:11.577 10:00:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:19:11.577 10:00:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:19:11.577 10:00:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:19:11.577 10:00:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # nvmf_veth_init 00:19:11.577 10:00:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:11.577 10:00:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:11.577 10:00:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:19:11.577 10:00:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:19:11.577 10:00:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:11.577 10:00:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:11.577 10:00:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:11.577 10:00:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:11.577 10:00:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:11.577 10:00:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:11.577 10:00:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:11.577 10:00:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:11.577 10:00:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:19:11.577 10:00:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:19:11.577 Cannot find device "nvmf_tgt_br" 00:19:11.577 10:00:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@155 -- # true 00:19:11.577 10:00:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:19:11.577 Cannot find device "nvmf_tgt_br2" 00:19:11.577 10:00:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@156 -- # true 00:19:11.577 10:00:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:19:11.577 10:00:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:19:11.577 Cannot find device "nvmf_tgt_br" 00:19:11.577 10:00:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@158 -- # true 00:19:11.577 10:00:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:19:11.577 Cannot find device "nvmf_tgt_br2" 00:19:11.577 10:00:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@159 -- # true 00:19:11.577 10:00:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:19:11.835 10:00:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@161 -- # ip link delete 
nvmf_init_if 00:19:11.835 10:00:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:11.835 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:11.835 10:00:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@162 -- # true 00:19:11.835 10:00:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:11.835 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:11.835 10:00:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@163 -- # true 00:19:11.835 10:00:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:19:11.835 10:00:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:11.835 10:00:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:11.835 10:00:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:11.835 10:00:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:11.835 10:00:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:11.835 10:00:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:11.835 10:00:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:19:11.835 10:00:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:19:11.835 10:00:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:19:11.835 10:00:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:19:11.835 10:00:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:19:11.835 10:00:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:19:11.835 10:00:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:11.835 10:00:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:11.835 10:00:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:11.835 10:00:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:19:11.835 10:00:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:19:11.835 10:00:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:19:11.835 10:00:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:11.835 10:00:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:11.835 10:00:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:11.835 10:00:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:11.835 10:00:25 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:19:11.835 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:11.835 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.136 ms 00:19:11.835 00:19:11.835 --- 10.0.0.2 ping statistics --- 00:19:11.835 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:11.835 rtt min/avg/max/mdev = 0.136/0.136/0.136/0.000 ms 00:19:11.835 10:00:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:19:11.835 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:11.835 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.117 ms 00:19:11.835 00:19:11.835 --- 10.0.0.3 ping statistics --- 00:19:11.835 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:11.835 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms 00:19:11.835 10:00:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:11.835 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:11.835 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.070 ms 00:19:11.835 00:19:11.835 --- 10.0.0.1 ping statistics --- 00:19:11.835 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:11.835 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:19:11.835 10:00:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:11.835 10:00:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@433 -- # return 0 00:19:11.835 10:00:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:11.835 10:00:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:11.835 10:00:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:11.835 10:00:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:11.835 10:00:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:11.835 10:00:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:11.835 10:00:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:11.835 10:00:25 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:19:11.835 10:00:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:11.835 10:00:25 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:11.835 10:00:25 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:19:12.093 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
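The nvmf_veth_init bring-up traced in the preceding lines is easier to follow collected in one place. The sketch below is a simplified reconstruction of those logged commands, using the namespace, interface, and address names from this run; it is not the authoritative nvmf/common.sh implementation, and the cleanup of any leftover devices is omitted:

NS=nvmf_tgt_ns_spdk
ip netns add "$NS"
# One veth pair for the initiator (stays on the host) and two for the target namespace.
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if netns "$NS"
ip link set nvmf_tgt_if2 netns "$NS"
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec "$NS" ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip link set nvmf_tgt_br2 up
ip netns exec "$NS" ip link set nvmf_tgt_if up
ip netns exec "$NS" ip link set nvmf_tgt_if2 up
ip netns exec "$NS" ip link set lo up
# Bridge the host-side peers together and open the NVMe/TCP port.
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
# Same connectivity checks as in the log: host to both target addresses, then namespace back to the initiator.
ping -c 1 10.0.0.2
ping -c 1 10.0.0.3
ip netns exec "$NS" ping -c 1 10.0.0.1
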
00:19:12.093 10:00:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # nvmfpid=71833 00:19:12.093 10:00:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # waitforlisten 71833 00:19:12.093 10:00:25 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@829 -- # '[' -z 71833 ']' 00:19:12.093 10:00:25 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:12.093 10:00:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:19:12.093 10:00:25 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:12.093 10:00:25 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:12.093 10:00:25 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:12.093 10:00:25 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:19:12.093 [2024-07-15 10:00:25.480620] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:19:12.093 [2024-07-15 10:00:25.480833] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:12.093 [2024-07-15 10:00:25.619381] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:19:12.351 [2024-07-15 10:00:25.726164] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:12.351 [2024-07-15 10:00:25.726297] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:12.351 [2024-07-15 10:00:25.726331] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:12.351 [2024-07-15 10:00:25.726358] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:12.351 [2024-07-15 10:00:25.726374] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
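The target process itself is launched inside that namespace with the core mask and trace flags shown above, and the harness then blocks until the RPC socket answers. A hedged approximation of this nvmfappstart step is below; the polling loop stands in for the real waitforlisten helper, and rpc_get_methods is used only as a cheap readiness probe:

SPDK=/home/vagrant/spdk_repo/spdk
# Start nvmf_tgt inside the test namespace with the arguments seen in this run.
ip netns exec nvmf_tgt_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x3 &
nvmfpid=$!
echo "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..."
# Simplified stand-in for waitforlisten: poll the RPC socket until it responds.
until "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited early"; exit 1; }
    sleep 0.1
done
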
00:19:12.351 [2024-07-15 10:00:25.726619] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:12.351 [2024-07-15 10:00:25.726624] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:12.920 10:00:26 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:12.920 10:00:26 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@862 -- # return 0 00:19:12.920 10:00:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:12.920 10:00:26 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:12.920 10:00:26 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:19:12.920 10:00:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:12.920 10:00:26 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:12.920 10:00:26 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:12.920 10:00:26 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:19:12.920 [2024-07-15 10:00:26.424037] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:12.921 10:00:26 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:12.921 10:00:26 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:19:12.921 10:00:26 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:12.921 10:00:26 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:19:12.921 10:00:26 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:12.921 10:00:26 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:12.921 10:00:26 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:12.921 10:00:26 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:19:12.921 [2024-07-15 10:00:26.448124] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:12.921 10:00:26 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:12.921 10:00:26 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:19:12.921 10:00:26 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:12.921 10:00:26 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:19:12.921 NULL1 00:19:12.921 10:00:26 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:12.921 10:00:26 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:19:12.921 10:00:26 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:12.921 10:00:26 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:19:12.921 Delay0 00:19:12.921 10:00:26 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:12.921 10:00:26 
nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:12.921 10:00:26 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:12.921 10:00:26 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:19:12.921 10:00:26 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:12.921 10:00:26 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=71884 00:19:12.921 10:00:26 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:19:12.921 10:00:26 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:19:13.183 [2024-07-15 10:00:26.664329] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:19:15.087 10:00:28 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:15.087 10:00:28 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:15.087 10:00:28 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:19:15.345 Read completed with error (sct=0, sc=8) 00:19:15.345 Read completed with error (sct=0, sc=8) 00:19:15.345 Read completed with error (sct=0, sc=8) 00:19:15.345 starting I/O failed: -6 00:19:15.345 Read completed with error (sct=0, sc=8) 00:19:15.345 Read completed with error (sct=0, sc=8) 00:19:15.345 Write completed with error (sct=0, sc=8) 00:19:15.345 Read completed with error (sct=0, sc=8) 00:19:15.345 starting I/O failed: -6 00:19:15.345 Read completed with error (sct=0, sc=8) 00:19:15.345 Read completed with error (sct=0, sc=8) 00:19:15.345 Write completed with error (sct=0, sc=8) 00:19:15.345 Read completed with error (sct=0, sc=8) 00:19:15.345 starting I/O failed: -6 00:19:15.345 Read completed with error (sct=0, sc=8) 00:19:15.345 Read completed with error (sct=0, sc=8) 00:19:15.345 Read completed with error (sct=0, sc=8) 00:19:15.345 Write completed with error (sct=0, sc=8) 00:19:15.345 starting I/O failed: -6 00:19:15.345 Read completed with error (sct=0, sc=8) 00:19:15.345 Read completed with error (sct=0, sc=8) 00:19:15.345 Read completed with error (sct=0, sc=8) 00:19:15.345 Read completed with error (sct=0, sc=8) 00:19:15.345 starting I/O failed: -6 00:19:15.345 Read completed with error (sct=0, sc=8) 00:19:15.345 Read completed with error (sct=0, sc=8) 00:19:15.345 Read completed with error (sct=0, sc=8) 00:19:15.345 Read completed with error (sct=0, sc=8) 00:19:15.345 starting I/O failed: -6 00:19:15.345 Read completed with error (sct=0, sc=8) 00:19:15.345 Read completed with error (sct=0, sc=8) 00:19:15.345 Read completed with error (sct=0, sc=8) 00:19:15.345 Write completed with error (sct=0, sc=8) 00:19:15.345 starting I/O failed: -6 00:19:15.345 Read completed with error (sct=0, sc=8) 00:19:15.345 Write completed with error (sct=0, sc=8) 00:19:15.345 Read completed with error (sct=0, sc=8) 00:19:15.345 Read completed with error (sct=0, sc=8) 00:19:15.345 starting I/O failed: -6 00:19:15.345 Read completed with error (sct=0, sc=8) 
00:19:15.345 Write completed with error (sct=0, sc=8) 00:19:15.345 Read completed with error (sct=0, sc=8) 00:19:15.345 Write completed with error (sct=0, sc=8) 00:19:15.345 starting I/O failed: -6 00:19:15.345 Read completed with error (sct=0, sc=8) 00:19:15.345 Read completed with error (sct=0, sc=8) 00:19:15.345 Write completed with error (sct=0, sc=8) 00:19:15.345 Read completed with error (sct=0, sc=8) 00:19:15.345 starting I/O failed: -6 00:19:15.345 Read completed with error (sct=0, sc=8) 00:19:15.345 Read completed with error (sct=0, sc=8) 00:19:15.345 Write completed with error (sct=0, sc=8) 00:19:15.345 Read completed with error (sct=0, sc=8) 00:19:15.345 starting I/O failed: -6 00:19:15.345 Read completed with error (sct=0, sc=8) 00:19:15.345 Write completed with error (sct=0, sc=8) 00:19:15.345 Read completed with error (sct=0, sc=8) 00:19:15.345 Write completed with error (sct=0, sc=8) 00:19:15.345 starting I/O failed: -6 00:19:15.345 Write completed with error (sct=0, sc=8) 00:19:15.345 [2024-07-15 10:00:28.702910] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b2f6f0 is same with the state(5) to be set 00:19:15.345 Read completed with error (sct=0, sc=8) 00:19:15.345 Read completed with error (sct=0, sc=8) 00:19:15.345 Read completed with error (sct=0, sc=8) 00:19:15.345 Read completed with error (sct=0, sc=8) 00:19:15.345 Write completed with error (sct=0, sc=8) 00:19:15.345 Read completed with error (sct=0, sc=8) 00:19:15.345 Read completed with error (sct=0, sc=8) 00:19:15.345 Read completed with error (sct=0, sc=8) 00:19:15.345 Read completed with error (sct=0, sc=8) 00:19:15.345 Read completed with error (sct=0, sc=8) 00:19:15.345 Read completed with error (sct=0, sc=8) 00:19:15.345 Read completed with error (sct=0, sc=8) 00:19:15.345 Write completed with error (sct=0, sc=8) 00:19:15.345 Read completed with error (sct=0, sc=8) 00:19:15.345 Write completed with error (sct=0, sc=8) 00:19:15.345 Read completed with error (sct=0, sc=8) 00:19:15.345 Write completed with error (sct=0, sc=8) 00:19:15.345 Read completed with error (sct=0, sc=8) 00:19:15.345 starting I/O failed: -6 00:19:15.345 Read completed with error (sct=0, sc=8) 00:19:15.345 Read completed with error (sct=0, sc=8) 00:19:15.345 Read completed with error (sct=0, sc=8) 00:19:15.345 Read completed with error (sct=0, sc=8) 00:19:15.345 Read completed with error (sct=0, sc=8) 00:19:15.345 Read completed with error (sct=0, sc=8) 00:19:15.345 Read completed with error (sct=0, sc=8) 00:19:15.345 Read completed with error (sct=0, sc=8) 00:19:15.345 Read completed with error (sct=0, sc=8) 00:19:15.345 starting I/O failed: -6 00:19:15.345 Write completed with error (sct=0, sc=8) 00:19:15.345 Write completed with error (sct=0, sc=8) 00:19:15.345 Read completed with error (sct=0, sc=8) 00:19:15.345 Read completed with error (sct=0, sc=8) 00:19:15.345 Read completed with error (sct=0, sc=8) 00:19:15.345 Read completed with error (sct=0, sc=8) 00:19:15.345 Read completed with error (sct=0, sc=8) 00:19:15.345 Write completed with error (sct=0, sc=8) 00:19:15.345 Read completed with error (sct=0, sc=8) 00:19:15.345 starting I/O failed: -6 00:19:15.345 Write completed with error (sct=0, sc=8) 00:19:15.345 Read completed with error (sct=0, sc=8) 00:19:15.345 Read completed with error (sct=0, sc=8) 00:19:15.345 Read completed with error (sct=0, sc=8) 00:19:15.345 Read completed with error (sct=0, sc=8) 00:19:15.345 Read completed with error (sct=0, sc=8) 00:19:15.345 Read completed with 
error (sct=0, sc=8) 00:19:15.345 Read completed with error (sct=0, sc=8) 00:19:15.345 Read completed with error (sct=0, sc=8) 00:19:15.345 Read completed with error (sct=0, sc=8) 00:19:15.345 Read completed with error (sct=0, sc=8) 00:19:15.345 Write completed with error (sct=0, sc=8) 00:19:15.345 starting I/O failed: -6 00:19:15.345 Read completed with error (sct=0, sc=8) 00:19:15.345 Read completed with error (sct=0, sc=8) 00:19:15.345 Read completed with error (sct=0, sc=8) 00:19:15.345 Read completed with error (sct=0, sc=8) 00:19:15.345 Read completed with error (sct=0, sc=8) 00:19:15.345 Write completed with error (sct=0, sc=8) 00:19:15.345 Read completed with error (sct=0, sc=8) 00:19:15.345 Read completed with error (sct=0, sc=8) 00:19:15.345 Read completed with error (sct=0, sc=8) 00:19:15.345 Read completed with error (sct=0, sc=8) 00:19:15.345 Read completed with error (sct=0, sc=8) 00:19:15.345 Read completed with error (sct=0, sc=8) 00:19:15.345 starting I/O failed: -6 00:19:15.345 Read completed with error (sct=0, sc=8) 00:19:15.345 Read completed with error (sct=0, sc=8) 00:19:15.345 Read completed with error (sct=0, sc=8) 00:19:15.345 Write completed with error (sct=0, sc=8) 00:19:15.345 Read completed with error (sct=0, sc=8) 00:19:15.345 Read completed with error (sct=0, sc=8) 00:19:15.345 Read completed with error (sct=0, sc=8) 00:19:15.345 Read completed with error (sct=0, sc=8) 00:19:15.345 Read completed with error (sct=0, sc=8) 00:19:15.345 Read completed with error (sct=0, sc=8) 00:19:15.345 Read completed with error (sct=0, sc=8) 00:19:15.345 Read completed with error (sct=0, sc=8) 00:19:15.345 starting I/O failed: -6 00:19:15.345 Read completed with error (sct=0, sc=8) 00:19:15.345 Read completed with error (sct=0, sc=8) 00:19:15.345 Read completed with error (sct=0, sc=8) 00:19:15.345 Read completed with error (sct=0, sc=8) 00:19:15.345 Write completed with error (sct=0, sc=8) 00:19:15.345 Read completed with error (sct=0, sc=8) 00:19:15.345 Read completed with error (sct=0, sc=8) 00:19:15.345 Read completed with error (sct=0, sc=8) 00:19:15.345 Read completed with error (sct=0, sc=8) 00:19:15.345 Read completed with error (sct=0, sc=8) 00:19:15.345 Read completed with error (sct=0, sc=8) 00:19:15.345 Read completed with error (sct=0, sc=8) 00:19:15.345 starting I/O failed: -6 00:19:15.345 Write completed with error (sct=0, sc=8) 00:19:15.345 Read completed with error (sct=0, sc=8) 00:19:15.345 Read completed with error (sct=0, sc=8) 00:19:15.345 Read completed with error (sct=0, sc=8) 00:19:15.345 Read completed with error (sct=0, sc=8) 00:19:15.345 Read completed with error (sct=0, sc=8) 00:19:15.345 Read completed with error (sct=0, sc=8) 00:19:15.345 starting I/O failed: -6 00:19:15.345 Write completed with error (sct=0, sc=8) 00:19:15.346 Read completed with error (sct=0, sc=8) 00:19:15.346 Read completed with error (sct=0, sc=8) 00:19:15.346 Write completed with error (sct=0, sc=8) 00:19:15.346 starting I/O failed: -6 00:19:15.346 Read completed with error (sct=0, sc=8) 00:19:15.346 Read completed with error (sct=0, sc=8) 00:19:15.346 Read completed with error (sct=0, sc=8) 00:19:15.346 Read completed with error (sct=0, sc=8) 00:19:15.346 starting I/O failed: -6 00:19:15.346 Write completed with error (sct=0, sc=8) 00:19:15.346 Read completed with error (sct=0, sc=8) 00:19:15.346 Write completed with error (sct=0, sc=8) 00:19:15.346 Read completed with error (sct=0, sc=8) 00:19:15.346 starting I/O failed: -6 00:19:15.346 Write completed with error (sct=0, 
sc=8) 00:19:15.346 Read completed with error (sct=0, sc=8) 00:19:15.346 Read completed with error (sct=0, sc=8) 00:19:15.346 Write completed with error (sct=0, sc=8) 00:19:15.346 starting I/O failed: -6 00:19:15.346 [2024-07-15 10:00:28.704529] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5ea4000c00 is same with the state(5) to be set 00:19:15.346 Read completed with error (sct=0, sc=8) 00:19:15.346 Read completed with error (sct=0, sc=8) 00:19:15.346 Write completed with error (sct=0, sc=8) 00:19:15.346 Read completed with error (sct=0, sc=8) 00:19:15.346 Read completed with error (sct=0, sc=8) 00:19:15.346 Write completed with error (sct=0, sc=8) 00:19:15.346 Read completed with error (sct=0, sc=8) 00:19:15.346 Read completed with error (sct=0, sc=8) 00:19:15.346 Read completed with error (sct=0, sc=8) 00:19:15.346 Read completed with error (sct=0, sc=8) 00:19:15.346 Read completed with error (sct=0, sc=8) 00:19:15.346 Write completed with error (sct=0, sc=8) 00:19:15.346 Read completed with error (sct=0, sc=8) 00:19:15.346 Read completed with error (sct=0, sc=8) 00:19:15.346 Read completed with error (sct=0, sc=8) 00:19:15.346 Read completed with error (sct=0, sc=8) 00:19:15.346 Write completed with error (sct=0, sc=8) 00:19:15.346 Write completed with error (sct=0, sc=8) 00:19:15.346 Write completed with error (sct=0, sc=8) 00:19:15.346 Read completed with error (sct=0, sc=8) 00:19:15.346 Read completed with error (sct=0, sc=8) 00:19:15.346 Read completed with error (sct=0, sc=8) 00:19:15.346 Read completed with error (sct=0, sc=8) 00:19:15.346 Read completed with error (sct=0, sc=8) 00:19:15.346 Read completed with error (sct=0, sc=8) 00:19:15.346 Read completed with error (sct=0, sc=8) 00:19:15.346 Read completed with error (sct=0, sc=8) 00:19:15.346 Read completed with error (sct=0, sc=8) 00:19:15.346 Read completed with error (sct=0, sc=8) 00:19:15.346 Write completed with error (sct=0, sc=8) 00:19:15.346 Read completed with error (sct=0, sc=8) 00:19:15.346 Read completed with error (sct=0, sc=8) 00:19:15.346 Write completed with error (sct=0, sc=8) 00:19:15.346 Write completed with error (sct=0, sc=8) 00:19:15.346 Read completed with error (sct=0, sc=8) 00:19:15.346 Read completed with error (sct=0, sc=8) 00:19:15.346 Read completed with error (sct=0, sc=8) 00:19:15.346 Read completed with error (sct=0, sc=8) 00:19:15.346 Read completed with error (sct=0, sc=8) 00:19:15.346 Read completed with error (sct=0, sc=8) 00:19:15.346 Write completed with error (sct=0, sc=8) 00:19:15.346 Read completed with error (sct=0, sc=8) 00:19:15.346 Read completed with error (sct=0, sc=8) 00:19:15.346 Write completed with error (sct=0, sc=8) 00:19:15.346 Write completed with error (sct=0, sc=8) 00:19:15.346 Read completed with error (sct=0, sc=8) 00:19:15.346 Read completed with error (sct=0, sc=8) 00:19:15.346 Write completed with error (sct=0, sc=8) 00:19:15.346 Read completed with error (sct=0, sc=8) 00:19:15.346 Read completed with error (sct=0, sc=8) 00:19:15.346 Write completed with error (sct=0, sc=8) 00:19:15.346 Read completed with error (sct=0, sc=8) 00:19:15.346 Read completed with error (sct=0, sc=8) 00:19:15.346 Write completed with error (sct=0, sc=8) 00:19:15.346 Read completed with error (sct=0, sc=8) 00:19:15.346 Read completed with error (sct=0, sc=8) 00:19:15.346 Read completed with error (sct=0, sc=8) 00:19:15.346 Read completed with error (sct=0, sc=8) 00:19:16.287 [2024-07-15 10:00:29.677395] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x1b2f510 is same with the state(5) to be set 00:19:16.287 Read completed with error (sct=0, sc=8) 00:19:16.287 Write completed with error (sct=0, sc=8) 00:19:16.287 Read completed with error (sct=0, sc=8) 00:19:16.287 Read completed with error (sct=0, sc=8) 00:19:16.287 Write completed with error (sct=0, sc=8) 00:19:16.287 Read completed with error (sct=0, sc=8) 00:19:16.287 Read completed with error (sct=0, sc=8) 00:19:16.287 Read completed with error (sct=0, sc=8) 00:19:16.287 Read completed with error (sct=0, sc=8) 00:19:16.287 Read completed with error (sct=0, sc=8) 00:19:16.287 Read completed with error (sct=0, sc=8) 00:19:16.287 Read completed with error (sct=0, sc=8) 00:19:16.287 Read completed with error (sct=0, sc=8) 00:19:16.287 Read completed with error (sct=0, sc=8) 00:19:16.287 Read completed with error (sct=0, sc=8) 00:19:16.287 Read completed with error (sct=0, sc=8) 00:19:16.287 Write completed with error (sct=0, sc=8) 00:19:16.287 Write completed with error (sct=0, sc=8) 00:19:16.287 Write completed with error (sct=0, sc=8) 00:19:16.287 Write completed with error (sct=0, sc=8) 00:19:16.287 Read completed with error (sct=0, sc=8) 00:19:16.287 Write completed with error (sct=0, sc=8) 00:19:16.287 Read completed with error (sct=0, sc=8) 00:19:16.287 Write completed with error (sct=0, sc=8) 00:19:16.287 Write completed with error (sct=0, sc=8) 00:19:16.287 Read completed with error (sct=0, sc=8) 00:19:16.287 Write completed with error (sct=0, sc=8) 00:19:16.287 [2024-07-15 10:00:29.697637] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b514c0 is same with the state(5) to be set 00:19:16.287 Read completed with error (sct=0, sc=8) 00:19:16.287 Write completed with error (sct=0, sc=8) 00:19:16.287 Read completed with error (sct=0, sc=8) 00:19:16.287 Read completed with error (sct=0, sc=8) 00:19:16.287 Read completed with error (sct=0, sc=8) 00:19:16.287 Write completed with error (sct=0, sc=8) 00:19:16.287 Read completed with error (sct=0, sc=8) 00:19:16.287 Read completed with error (sct=0, sc=8) 00:19:16.287 Read completed with error (sct=0, sc=8) 00:19:16.287 Read completed with error (sct=0, sc=8) 00:19:16.287 Read completed with error (sct=0, sc=8) 00:19:16.287 Read completed with error (sct=0, sc=8) 00:19:16.287 Read completed with error (sct=0, sc=8) 00:19:16.287 Read completed with error (sct=0, sc=8) 00:19:16.287 Read completed with error (sct=0, sc=8) 00:19:16.287 Read completed with error (sct=0, sc=8) 00:19:16.287 Read completed with error (sct=0, sc=8) 00:19:16.287 Write completed with error (sct=0, sc=8) 00:19:16.287 Read completed with error (sct=0, sc=8) 00:19:16.287 Read completed with error (sct=0, sc=8) 00:19:16.287 Read completed with error (sct=0, sc=8) 00:19:16.287 Read completed with error (sct=0, sc=8) 00:19:16.287 Read completed with error (sct=0, sc=8) 00:19:16.287 Read completed with error (sct=0, sc=8) 00:19:16.287 Read completed with error (sct=0, sc=8) 00:19:16.287 Read completed with error (sct=0, sc=8) 00:19:16.287 Write completed with error (sct=0, sc=8) 00:19:16.287 Write completed with error (sct=0, sc=8) 00:19:16.287 [2024-07-15 10:00:29.698866] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b52a80 is same with the state(5) to be set 00:19:16.287 Read completed with error (sct=0, sc=8) 00:19:16.287 Read completed with error (sct=0, sc=8) 00:19:16.287 Read completed with error (sct=0, sc=8) 00:19:16.287 Write completed with error (sct=0, 
sc=8) 00:19:16.287 Read completed with error (sct=0, sc=8) 00:19:16.287 Read completed with error (sct=0, sc=8) 00:19:16.287 Read completed with error (sct=0, sc=8) 00:19:16.287 Read completed with error (sct=0, sc=8) 00:19:16.287 Write completed with error (sct=0, sc=8) 00:19:16.287 Read completed with error (sct=0, sc=8) 00:19:16.287 Read completed with error (sct=0, sc=8) 00:19:16.287 Read completed with error (sct=0, sc=8) 00:19:16.287 Write completed with error (sct=0, sc=8) 00:19:16.287 Read completed with error (sct=0, sc=8) 00:19:16.287 Read completed with error (sct=0, sc=8) 00:19:16.287 Read completed with error (sct=0, sc=8) 00:19:16.287 Read completed with error (sct=0, sc=8) 00:19:16.287 Read completed with error (sct=0, sc=8) 00:19:16.287 Read completed with error (sct=0, sc=8) 00:19:16.287 Read completed with error (sct=0, sc=8) 00:19:16.287 Read completed with error (sct=0, sc=8) 00:19:16.287 Write completed with error (sct=0, sc=8) 00:19:16.287 Read completed with error (sct=0, sc=8) 00:19:16.287 Read completed with error (sct=0, sc=8) 00:19:16.287 Read completed with error (sct=0, sc=8) 00:19:16.287 Write completed with error (sct=0, sc=8) 00:19:16.287 [2024-07-15 10:00:29.699539] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5ea400cfe0 is same with the state(5) to be set 00:19:16.287 Read completed with error (sct=0, sc=8) 00:19:16.287 Read completed with error (sct=0, sc=8) 00:19:16.287 Read completed with error (sct=0, sc=8) 00:19:16.287 Read completed with error (sct=0, sc=8) 00:19:16.287 Read completed with error (sct=0, sc=8) 00:19:16.287 Read completed with error (sct=0, sc=8) 00:19:16.287 Read completed with error (sct=0, sc=8) 00:19:16.287 Read completed with error (sct=0, sc=8) 00:19:16.287 Read completed with error (sct=0, sc=8) 00:19:16.287 Read completed with error (sct=0, sc=8) 00:19:16.287 Write completed with error (sct=0, sc=8) 00:19:16.287 Read completed with error (sct=0, sc=8) 00:19:16.287 Read completed with error (sct=0, sc=8) 00:19:16.287 Read completed with error (sct=0, sc=8) 00:19:16.287 Read completed with error (sct=0, sc=8) 00:19:16.287 Read completed with error (sct=0, sc=8) 00:19:16.287 Read completed with error (sct=0, sc=8) 00:19:16.287 Read completed with error (sct=0, sc=8) 00:19:16.287 Read completed with error (sct=0, sc=8) 00:19:16.287 Read completed with error (sct=0, sc=8) 00:19:16.287 Write completed with error (sct=0, sc=8) 00:19:16.287 Write completed with error (sct=0, sc=8) 00:19:16.287 Write completed with error (sct=0, sc=8) 00:19:16.287 Read completed with error (sct=0, sc=8) 00:19:16.287 Write completed with error (sct=0, sc=8) 00:19:16.287 Read completed with error (sct=0, sc=8) 00:19:16.287 [2024-07-15 10:00:29.699720] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5ea400d600 is same with the state(5) to be set 00:19:16.287 Initializing NVMe Controllers 00:19:16.287 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:19:16.287 Controller IO queue size 128, less than required. 00:19:16.287 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:19:16.287 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:19:16.287 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:19:16.287 Initialization complete. Launching workers. 
00:19:16.287 ======================================================== 00:19:16.287 Latency(us) 00:19:16.287 Device Information : IOPS MiB/s Average min max 00:19:16.287 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 175.87 0.09 882938.71 1474.71 1017206.55 00:19:16.287 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 172.90 0.08 889720.97 367.97 1018001.18 00:19:16.287 ======================================================== 00:19:16.287 Total : 348.77 0.17 886300.94 367.97 1018001.18 00:19:16.287 00:19:16.287 [2024-07-15 10:00:29.700526] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b2f510 (9): Bad file descriptor 00:19:16.287 /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf: errors occurred 00:19:16.287 10:00:29 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:16.287 10:00:29 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:19:16.287 10:00:29 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 71884 00:19:16.287 10:00:29 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:19:16.858 10:00:30 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:19:16.858 10:00:30 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 71884 00:19:16.858 /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (71884) - No such process 00:19:16.858 10:00:30 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 71884 00:19:16.858 10:00:30 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@648 -- # local es=0 00:19:16.858 10:00:30 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # valid_exec_arg wait 71884 00:19:16.858 10:00:30 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@636 -- # local arg=wait 00:19:16.858 10:00:30 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:16.858 10:00:30 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # type -t wait 00:19:16.858 10:00:30 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:16.858 10:00:30 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # wait 71884 00:19:16.858 10:00:30 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # es=1 00:19:16.858 10:00:30 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:16.858 10:00:30 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:16.858 10:00:30 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:16.858 10:00:30 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:19:16.858 10:00:30 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:16.858 10:00:30 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:19:16.858 10:00:30 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:16.858 10:00:30 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:16.858 10:00:30 nvmf_tcp.nvmf_delete_subsystem -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:19:16.858 10:00:30 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:19:16.858 [2024-07-15 10:00:30.237924] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:16.858 10:00:30 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:16.858 10:00:30 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:16.858 10:00:30 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:16.858 10:00:30 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:19:16.858 10:00:30 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:16.858 10:00:30 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=71930 00:19:16.858 10:00:30 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:19:16.858 10:00:30 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:19:16.858 10:00:30 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 71930 00:19:16.858 10:00:30 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:19:16.858 [2024-07-15 10:00:30.420478] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
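Scattered across the trace above, the test body reduces to a short RPC sequence: expose a null bdev behind a delay bdev so that I/O stays queued, start spdk_nvme_perf against the subsystem, and delete the subsystem while that I/O is still in flight. The earlier burst of "completed with error (sct=0, sc=8)" completions is the expected outcome, after which the subsystem is re-created and a second perf run is started. The following is an illustrative condensation of those logged commands, not the delete_subsystem.sh script itself:

RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py"
PERF="/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf"

$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$RPC bdev_null_create NULL1 1000 512                      # sizes as logged
$RPC bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0

# Queue I/O against the delayed namespace, then pull the subsystem out from under it.
$PERF -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
      -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
perf_pid=$!
sleep 2
$RPC nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
wait "$perf_pid" || true    # perf is expected to report the aborted I/O and exit non-zero
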
00:19:17.425 10:00:30 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:19:17.425 10:00:30 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 71930 00:19:17.425 10:00:30 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:19:17.691 10:00:31 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:19:17.691 10:00:31 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 71930 00:19:17.691 10:00:31 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:19:18.257 10:00:31 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:19:18.257 10:00:31 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 71930 00:19:18.257 10:00:31 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:19:18.822 10:00:32 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:19:18.822 10:00:32 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 71930 00:19:18.822 10:00:32 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:19:19.388 10:00:32 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:19:19.388 10:00:32 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 71930 00:19:19.388 10:00:32 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:19:19.956 10:00:33 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:19:19.956 10:00:33 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 71930 00:19:19.956 10:00:33 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:19:19.956 Initializing NVMe Controllers 00:19:19.956 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:19:19.956 Controller IO queue size 128, less than required. 00:19:19.956 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:19:19.956 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:19:19.956 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:19:19.956 Initialization complete. Launching workers. 
00:19:19.956 ======================================================== 00:19:19.956 Latency(us) 00:19:19.956 Device Information : IOPS MiB/s Average min max 00:19:19.956 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1003091.51 1000129.65 1041054.02 00:19:19.956 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1004425.64 1000139.78 1012826.02 00:19:19.956 ======================================================== 00:19:19.956 Total : 256.00 0.12 1003758.57 1000129.65 1041054.02 00:19:19.956 00:19:20.215 10:00:33 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:19:20.215 10:00:33 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 71930 00:19:20.215 /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (71930) - No such process 00:19:20.215 10:00:33 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 71930 00:19:20.215 10:00:33 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:19:20.215 10:00:33 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:19:20.215 10:00:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:20.215 10:00:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # sync 00:19:20.473 10:00:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:20.473 10:00:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@120 -- # set +e 00:19:20.473 10:00:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:20.473 10:00:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:20.473 rmmod nvme_tcp 00:19:20.473 rmmod nvme_fabrics 00:19:20.473 rmmod nvme_keyring 00:19:20.473 10:00:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:20.473 10:00:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set -e 00:19:20.473 10:00:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # return 0 00:19:20.473 10:00:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # '[' -n 71833 ']' 00:19:20.473 10:00:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # killprocess 71833 00:19:20.473 10:00:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@948 -- # '[' -z 71833 ']' 00:19:20.474 10:00:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # kill -0 71833 00:19:20.474 10:00:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # uname 00:19:20.474 10:00:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:20.474 10:00:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 71833 00:19:20.474 10:00:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:20.474 10:00:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:19:20.474 10:00:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@966 -- # echo 'killing process with pid 71833' 00:19:20.474 killing process with pid 71833 00:19:20.474 10:00:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@967 -- # kill 71833 00:19:20.474 10:00:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # wait 71833 00:19:20.733 10:00:34 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:20.733 10:00:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:20.733 10:00:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:20.733 10:00:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:20.733 10:00:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:20.733 10:00:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:20.733 10:00:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:20.733 10:00:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:20.733 10:00:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:19:20.733 00:19:20.733 real 0m9.263s 00:19:20.733 user 0m28.981s 00:19:20.733 sys 0m1.118s 00:19:20.733 10:00:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:20.733 10:00:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:19:20.733 ************************************ 00:19:20.733 END TEST nvmf_delete_subsystem 00:19:20.733 ************************************ 00:19:20.733 10:00:34 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:19:20.733 10:00:34 nvmf_tcp -- nvmf/nvmf.sh@36 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:19:20.733 10:00:34 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:19:20.733 10:00:34 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:20.733 10:00:34 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:20.733 ************************************ 00:19:20.733 START TEST nvmf_ns_masking 00:19:20.733 ************************************ 00:19:20.733 10:00:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1123 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:19:20.993 * Looking for test storage... 
00:19:20.993 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:19:20.993 10:00:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:20.993 10:00:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:19:20.993 10:00:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:20.993 10:00:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:20.993 10:00:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:20.993 10:00:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:20.993 10:00:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:20.993 10:00:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:20.993 10:00:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:20.993 10:00:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:20.993 10:00:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:20.993 10:00:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:20.993 10:00:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec 00:19:20.993 10:00:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=a2b6b25a-cc90-4aea-9f09-c06f8a634aec 00:19:20.993 10:00:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:20.993 10:00:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:20.993 10:00:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:20.993 10:00:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:20.993 10:00:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:20.993 10:00:34 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:20.993 10:00:34 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:20.993 10:00:34 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:20.993 10:00:34 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:20.993 10:00:34 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:20.993 10:00:34 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:20.993 10:00:34 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:19:20.993 10:00:34 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:20.993 10:00:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@47 -- # : 0 00:19:20.993 10:00:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:20.993 10:00:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:20.993 10:00:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:20.993 10:00:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:20.993 10:00:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:20.993 10:00:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:20.994 10:00:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:20.994 10:00:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:20.994 10:00:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:20.994 10:00:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:19:20.994 10:00:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:19:20.994 10:00:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:19:20.994 10:00:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=ad25d68b-ffdd-494f-a982-d9068264f93e 00:19:20.994 10:00:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:19:20.994 10:00:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=ed3ff89e-e168-4c92-84dc-ed54a5b689f9 00:19:20.994 10:00:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:19:20.994 
10:00:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:19:20.994 10:00:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:19:20.994 10:00:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:19:20.994 10:00:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=92a77c08-91d9-4060-96b2-0f1cffd52d0d 00:19:20.994 10:00:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:19:20.994 10:00:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:20.994 10:00:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:20.994 10:00:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:20.994 10:00:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:20.994 10:00:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:20.994 10:00:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:20.994 10:00:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:20.994 10:00:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:20.994 10:00:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:19:20.994 10:00:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:19:20.994 10:00:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:19:20.994 10:00:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:19:20.994 10:00:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:19:20.994 10:00:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@432 -- # nvmf_veth_init 00:19:20.994 10:00:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:20.994 10:00:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:20.994 10:00:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:19:20.994 10:00:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:19:20.994 10:00:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:20.994 10:00:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:20.994 10:00:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:20.994 10:00:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:20.994 10:00:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:20.994 10:00:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:20.994 10:00:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:20.994 10:00:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:20.994 10:00:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:19:20.994 10:00:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:19:20.994 Cannot find device "nvmf_tgt_br" 00:19:20.994 10:00:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@155 -- # true 00:19:20.994 10:00:34 
nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:19:20.994 Cannot find device "nvmf_tgt_br2" 00:19:20.994 10:00:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@156 -- # true 00:19:20.994 10:00:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:19:20.994 10:00:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:19:20.994 Cannot find device "nvmf_tgt_br" 00:19:20.994 10:00:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@158 -- # true 00:19:20.994 10:00:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:19:20.994 Cannot find device "nvmf_tgt_br2" 00:19:20.994 10:00:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@159 -- # true 00:19:20.994 10:00:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:19:20.994 10:00:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:19:20.994 10:00:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:20.994 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:20.994 10:00:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@162 -- # true 00:19:20.994 10:00:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:20.994 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:20.994 10:00:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@163 -- # true 00:19:20.994 10:00:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:19:20.994 10:00:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:21.253 10:00:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:21.253 10:00:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:21.253 10:00:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:21.253 10:00:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:21.253 10:00:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:21.253 10:00:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:19:21.253 10:00:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:19:21.253 10:00:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:19:21.253 10:00:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:19:21.253 10:00:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:19:21.253 10:00:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:19:21.253 10:00:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:21.253 10:00:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:21.253 10:00:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:21.253 10:00:34 
nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:19:21.253 10:00:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:19:21.253 10:00:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:19:21.253 10:00:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:21.253 10:00:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:21.253 10:00:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:21.253 10:00:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:21.253 10:00:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:19:21.253 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:21.253 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.119 ms 00:19:21.253 00:19:21.253 --- 10.0.0.2 ping statistics --- 00:19:21.253 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:21.253 rtt min/avg/max/mdev = 0.119/0.119/0.119/0.000 ms 00:19:21.253 10:00:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:19:21.253 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:21.253 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.060 ms 00:19:21.253 00:19:21.253 --- 10.0.0.3 ping statistics --- 00:19:21.253 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:21.253 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:19:21.253 10:00:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:21.253 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:21.253 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.035 ms 00:19:21.253 00:19:21.253 --- 10.0.0.1 ping statistics --- 00:19:21.253 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:21.253 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:19:21.253 10:00:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:21.253 10:00:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@433 -- # return 0 00:19:21.253 10:00:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:21.253 10:00:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:21.253 10:00:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:21.253 10:00:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:21.253 10:00:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:21.253 10:00:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:21.253 10:00:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:21.253 10:00:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:19:21.253 10:00:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:21.253 10:00:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:21.253 10:00:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:19:21.253 10:00:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@481 -- # nvmfpid=72161 00:19:21.253 10:00:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:19:21.253 10:00:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@482 -- # waitforlisten 72161 00:19:21.253 10:00:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@829 -- # '[' -z 72161 ']' 00:19:21.253 10:00:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:21.253 10:00:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:21.253 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:21.253 10:00:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:21.253 10:00:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:21.253 10:00:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:19:21.510 [2024-07-15 10:00:34.847799] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:19:21.510 [2024-07-15 10:00:34.847886] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:21.510 [2024-07-15 10:00:34.988272] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:21.767 [2024-07-15 10:00:35.097567] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:21.767 [2024-07-15 10:00:35.097616] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:19:21.767 [2024-07-15 10:00:35.097624] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:21.767 [2024-07-15 10:00:35.097629] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:21.767 [2024-07-15 10:00:35.097634] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:21.767 [2024-07-15 10:00:35.097673] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:22.351 10:00:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:22.351 10:00:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@862 -- # return 0 00:19:22.351 10:00:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:22.351 10:00:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:22.351 10:00:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:19:22.351 10:00:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:22.351 10:00:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:19:22.610 [2024-07-15 10:00:36.059913] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:22.610 10:00:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:19:22.610 10:00:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:19:22.610 10:00:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:19:22.869 Malloc1 00:19:22.870 10:00:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:19:23.130 Malloc2 00:19:23.130 10:00:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:19:23.388 10:00:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:19:23.645 10:00:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:23.905 [2024-07-15 10:00:37.269003] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:23.905 10:00:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:19:23.905 10:00:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 92a77c08-91d9-4060-96b2-0f1cffd52d0d -a 10.0.0.2 -s 4420 -i 4 00:19:23.905 10:00:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:19:23.905 10:00:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:19:23.905 10:00:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:19:23.905 10:00:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:19:23.905 10:00:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 
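For reference, the target-side setup that the trace above walks through reduces to a handful of rpc.py calls plus one initiator-side connect. This is a condensed sketch, not the test script itself; rpc.py stands for /home/vagrant/spdk_repo/spdk/scripts/rpc.py, and the address, port, subsystem NQN and host UUID are simply the values used in this particular run:

  # create the TCP transport and two malloc bdevs to export
  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py bdev_malloc_create 64 512 -b Malloc1
  rpc.py bdev_malloc_create 64 512 -b Malloc2
  # subsystem, namespace 1 and a TCP listener inside the target network namespace
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # initiator side: connect with an explicit host NQN and host UUID so the target
  # can key per-host namespace visibility off this identity
  nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
    -I 92a77c08-91d9-4060-96b2-0f1cffd52d0d -a 10.0.0.2 -s 4420 -i 4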
00:19:25.868 10:00:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:19:25.868 10:00:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:19:25.868 10:00:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:19:25.868 10:00:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:19:25.868 10:00:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:19:25.868 10:00:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:19:25.868 10:00:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:19:25.868 10:00:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:19:26.127 10:00:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:19:26.127 10:00:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:19:26.127 10:00:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:19:26.127 10:00:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:19:26.127 10:00:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:19:26.127 [ 0]:0x1 00:19:26.127 10:00:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:19:26.127 10:00:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:19:26.127 10:00:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=2ba12b7fac034cfdaa1435080b54e7f3 00:19:26.127 10:00:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 2ba12b7fac034cfdaa1435080b54e7f3 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:26.127 10:00:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:19:26.386 10:00:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:19:26.386 10:00:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:19:26.386 10:00:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:19:26.386 [ 0]:0x1 00:19:26.386 10:00:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:19:26.386 10:00:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:19:26.386 10:00:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=2ba12b7fac034cfdaa1435080b54e7f3 00:19:26.386 10:00:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 2ba12b7fac034cfdaa1435080b54e7f3 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:26.386 10:00:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:19:26.386 10:00:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:19:26.386 10:00:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:19:26.386 [ 1]:0x2 00:19:26.386 10:00:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:19:26.386 10:00:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:19:26.386 10:00:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nguid=7406621f26a042249da3f189c7ed89bd 00:19:26.386 10:00:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 7406621f26a042249da3f189c7ed89bd != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:26.386 10:00:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:19:26.386 10:00:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:26.386 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:26.386 10:00:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:26.645 10:00:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:19:26.904 10:00:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:19:26.904 10:00:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 92a77c08-91d9-4060-96b2-0f1cffd52d0d -a 10.0.0.2 -s 4420 -i 4 00:19:26.904 10:00:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:19:26.904 10:00:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:19:26.904 10:00:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:19:26.904 10:00:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 1 ]] 00:19:26.904 10:00:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=1 00:19:26.904 10:00:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:19:29.477 10:00:42 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:19:29.477 10:00:42 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:19:29.477 10:00:42 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:19:29.477 10:00:42 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:19:29.477 10:00:42 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:19:29.477 10:00:42 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:19:29.477 10:00:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:19:29.477 10:00:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:19:29.477 10:00:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:19:29.477 10:00:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:19:29.477 10:00:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:19:29.477 10:00:42 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:19:29.477 10:00:42 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:19:29.477 10:00:42 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:19:29.477 10:00:42 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:29.477 10:00:42 nvmf_tcp.nvmf_ns_masking -- 
common/autotest_common.sh@640 -- # type -t ns_is_visible 00:19:29.477 10:00:42 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:29.477 10:00:42 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:19:29.477 10:00:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:19:29.477 10:00:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:19:29.477 10:00:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:19:29.477 10:00:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:19:29.477 10:00:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:19:29.477 10:00:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:29.477 10:00:42 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:19:29.477 10:00:42 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:29.477 10:00:42 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:29.477 10:00:42 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:29.477 10:00:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:19:29.477 10:00:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:19:29.477 10:00:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:19:29.477 [ 0]:0x2 00:19:29.477 10:00:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:19:29.477 10:00:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:19:29.477 10:00:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=7406621f26a042249da3f189c7ed89bd 00:19:29.477 10:00:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 7406621f26a042249da3f189c7ed89bd != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:29.477 10:00:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:19:29.477 10:00:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:19:29.477 10:00:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:19:29.477 10:00:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:19:29.477 [ 0]:0x1 00:19:29.477 10:00:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:19:29.477 10:00:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:19:29.477 10:00:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=2ba12b7fac034cfdaa1435080b54e7f3 00:19:29.477 10:00:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 2ba12b7fac034cfdaa1435080b54e7f3 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:29.477 10:00:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:19:29.477 10:00:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:19:29.477 10:00:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:19:29.477 [ 1]:0x2 00:19:29.477 10:00:42 
nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:19:29.477 10:00:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:19:29.477 10:00:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=7406621f26a042249da3f189c7ed89bd 00:19:29.477 10:00:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 7406621f26a042249da3f189c7ed89bd != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:29.477 10:00:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:19:29.735 10:00:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:19:29.735 10:00:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:19:29.735 10:00:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:19:29.735 10:00:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:19:29.735 10:00:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:29.735 10:00:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:19:29.735 10:00:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:29.735 10:00:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:19:29.735 10:00:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:19:29.735 10:00:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:19:29.735 10:00:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:19:29.735 10:00:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:19:29.735 10:00:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:19:29.735 10:00:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:29.735 10:00:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:19:29.735 10:00:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:29.735 10:00:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:29.735 10:00:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:29.735 10:00:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:19:29.735 10:00:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:19:29.735 10:00:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:19:29.735 [ 0]:0x2 00:19:29.735 10:00:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:19:29.735 10:00:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:19:29.992 10:00:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=7406621f26a042249da3f189c7ed89bd 00:19:29.992 10:00:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 7406621f26a042249da3f189c7ed89bd != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:29.992 10:00:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 
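The visibility checks in the trace above come down to two nvme-cli calls: list the active namespaces on the connected controller, then read the NGUID of a given NSID and compare it against the all-zero value reported for a namespace the host is not allowed to see. A minimal stand-alone version of that check, assuming the controller came up as /dev/nvme0 as it did in this run, could look like:

  # succeed only if <nsid> (hex, e.g. 0x1) is listed and exposes a non-zero NGUID
  ns_is_visible() {
    local nsid=$1
    nvme list-ns /dev/nvme0 | grep -q ":${nsid}" || return 1
    local nguid
    nguid=$(nvme id-ns /dev/nvme0 -n "${nsid}" -o json | jq -r .nguid)
    [[ "${nguid}" != "00000000000000000000000000000000" ]]
  }

On the target side the visibility itself is toggled with the RPCs seen in the trace: nvmf_subsystem_add_ns ... --no-auto-visible to hide a namespace by default, then nvmf_ns_add_host / nvmf_ns_remove_host to grant or revoke it for a specific host NQN.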
00:19:29.992 10:00:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:29.992 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:29.992 10:00:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:19:30.250 10:00:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:19:30.250 10:00:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 92a77c08-91d9-4060-96b2-0f1cffd52d0d -a 10.0.0.2 -s 4420 -i 4 00:19:30.250 10:00:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:19:30.250 10:00:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:19:30.250 10:00:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:19:30.250 10:00:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:19:30.250 10:00:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:19:30.250 10:00:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:19:32.149 10:00:45 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:19:32.149 10:00:45 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:19:32.149 10:00:45 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:19:32.149 10:00:45 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:19:32.149 10:00:45 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:19:32.149 10:00:45 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:19:32.149 10:00:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:19:32.149 10:00:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:19:32.407 10:00:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:19:32.407 10:00:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:19:32.407 10:00:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:19:32.407 10:00:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:19:32.407 10:00:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:19:32.407 [ 0]:0x1 00:19:32.407 10:00:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:19:32.407 10:00:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:19:32.407 10:00:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=2ba12b7fac034cfdaa1435080b54e7f3 00:19:32.407 10:00:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 2ba12b7fac034cfdaa1435080b54e7f3 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:32.407 10:00:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:19:32.407 10:00:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:19:32.407 10:00:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme 
list-ns /dev/nvme0 00:19:32.407 [ 1]:0x2 00:19:32.407 10:00:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:19:32.407 10:00:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:19:32.407 10:00:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=7406621f26a042249da3f189c7ed89bd 00:19:32.407 10:00:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 7406621f26a042249da3f189c7ed89bd != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:32.407 10:00:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:19:32.663 10:00:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:19:32.663 10:00:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:19:32.663 10:00:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:19:32.663 10:00:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:19:32.663 10:00:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:32.663 10:00:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:19:32.663 10:00:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:32.663 10:00:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:19:32.663 10:00:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:19:32.663 10:00:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:19:32.663 10:00:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:19:32.663 10:00:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:19:32.663 10:00:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:19:32.663 10:00:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:32.663 10:00:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:19:32.663 10:00:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:32.663 10:00:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:32.663 10:00:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:32.663 10:00:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:19:32.663 10:00:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:19:32.663 10:00:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:19:32.663 [ 0]:0x2 00:19:32.663 10:00:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:19:32.663 10:00:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:19:32.921 10:00:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=7406621f26a042249da3f189c7ed89bd 00:19:32.921 10:00:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 7406621f26a042249da3f189c7ed89bd != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:32.921 10:00:46 
nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:19:32.921 10:00:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:19:32.921 10:00:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:19:32.921 10:00:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:32.921 10:00:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:32.921 10:00:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:32.921 10:00:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:32.921 10:00:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:32.921 10:00:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:32.921 10:00:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:32.921 10:00:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:19:32.921 10:00:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:19:32.921 [2024-07-15 10:00:46.468731] nvmf_rpc.c:1791:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:19:32.921 2024/07/15 10:00:46 error on JSON-RPC call, method: nvmf_ns_remove_host, params: map[host:nqn.2016-06.io.spdk:host1 nqn:nqn.2016-06.io.spdk:cnode1 nsid:2], err: error received for nvmf_ns_remove_host method, err: Code=-32602 Msg=Invalid parameters 00:19:32.921 request: 00:19:32.921 { 00:19:32.921 "method": "nvmf_ns_remove_host", 00:19:32.921 "params": { 00:19:32.921 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:32.921 "nsid": 2, 00:19:32.921 "host": "nqn.2016-06.io.spdk:host1" 00:19:32.921 } 00:19:32.921 } 00:19:32.921 Got JSON-RPC error response 00:19:32.921 GoRPCClient: error on JSON-RPC call 00:19:32.921 10:00:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:19:32.921 10:00:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:32.921 10:00:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:32.921 10:00:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:32.921 10:00:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:19:32.921 10:00:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:19:32.921 10:00:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:19:32.921 10:00:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:19:32.921 10:00:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:32.921 10:00:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:19:32.921 10:00:46 
nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:32.921 10:00:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:19:32.921 10:00:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:19:32.921 10:00:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:19:33.178 10:00:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:19:33.178 10:00:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:19:33.178 10:00:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:19:33.178 10:00:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:33.178 10:00:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:19:33.178 10:00:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:33.178 10:00:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:33.178 10:00:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:33.178 10:00:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:19:33.178 10:00:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:19:33.178 10:00:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:19:33.178 [ 0]:0x2 00:19:33.178 10:00:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:19:33.178 10:00:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:19:33.179 10:00:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=7406621f26a042249da3f189c7ed89bd 00:19:33.179 10:00:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 7406621f26a042249da3f189c7ed89bd != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:33.179 10:00:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:19:33.179 10:00:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:33.179 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:33.179 10:00:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=72535 00:19:33.179 10:00:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:19:33.179 10:00:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:19:33.179 10:00:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 72535 /var/tmp/host.sock 00:19:33.179 10:00:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@829 -- # '[' -z 72535 ']' 00:19:33.179 10:00:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/host.sock 00:19:33.179 10:00:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:33.179 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:19:33.179 10:00:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 
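The remainder of the test switches from the kernel initiator to a second SPDK application: spdk_tgt is started again with -r /var/tmp/host.sock -m 2 so it can be driven as a host over its own RPC socket, controllers are attached to it once per host NQN, and the bdevs it reports are compared against the expected namespace NGUIDs. Stripped of the xtrace noise, and with paths shortened, that flow is roughly the following sketch (the NGUID derivation mirrors what the trace shows: the namespace UUID upper-cased with the dashes stripped):

  # start a second SPDK app as the host, listening on its own RPC socket
  spdk_tgt -r /var/tmp/host.sock -m 2 &
  # give the namespace a fixed NGUID derived from its UUID
  nguid=$(tr -d - <<< AD25D68B-FFDD-494F-A982-D9068264F93E)
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g "$nguid"
  # attach a controller from the host app and inspect the bdevs it ends up with
  rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 \
    -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0
  rpc.py -s /var/tmp/host.sock bdev_get_bdevs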
00:19:33.179 10:00:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:33.179 10:00:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:19:33.179 [2024-07-15 10:00:46.705091] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:19:33.179 [2024-07-15 10:00:46.705177] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72535 ] 00:19:33.436 [2024-07-15 10:00:46.829181] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:33.436 [2024-07-15 10:00:46.968498] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:34.391 10:00:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:34.391 10:00:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@862 -- # return 0 00:19:34.391 10:00:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:34.392 10:00:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:19:34.649 10:00:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid ad25d68b-ffdd-494f-a982-d9068264f93e 00:19:34.649 10:00:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:19:34.649 10:00:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g AD25D68BFFDD494FA982D9068264F93E -i 00:19:34.907 10:00:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid ed3ff89e-e168-4c92-84dc-ed54a5b689f9 00:19:34.907 10:00:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:19:34.907 10:00:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g ED3FF89EE1684C9284DCED54A5B689F9 -i 00:19:35.165 10:00:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:19:35.422 10:00:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:19:35.680 10:00:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:19:35.680 10:00:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:19:35.938 nvme0n1 00:19:35.938 10:00:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:19:35.938 10:00:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp 
-a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:19:36.502 nvme1n2 00:19:36.502 10:00:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:19:36.502 10:00:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:19:36.502 10:00:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:19:36.502 10:00:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:19:36.502 10:00:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:19:36.502 10:00:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:19:36.502 10:00:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:19:36.502 10:00:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:19:36.502 10:00:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:19:36.761 10:00:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ ad25d68b-ffdd-494f-a982-d9068264f93e == \a\d\2\5\d\6\8\b\-\f\f\d\d\-\4\9\4\f\-\a\9\8\2\-\d\9\0\6\8\2\6\4\f\9\3\e ]] 00:19:36.761 10:00:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:19:36.761 10:00:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:19:36.761 10:00:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:19:37.328 10:00:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ ed3ff89e-e168-4c92-84dc-ed54a5b689f9 == \e\d\3\f\f\8\9\e\-\e\1\6\8\-\4\c\9\2\-\8\4\d\c\-\e\d\5\4\a\5\b\6\8\9\f\9 ]] 00:19:37.328 10:00:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@138 -- # killprocess 72535 00:19:37.328 10:00:50 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@948 -- # '[' -z 72535 ']' 00:19:37.328 10:00:50 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # kill -0 72535 00:19:37.328 10:00:50 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # uname 00:19:37.328 10:00:50 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:37.328 10:00:50 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 72535 00:19:37.328 10:00:50 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:19:37.328 10:00:50 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:19:37.328 10:00:50 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@966 -- # echo 'killing process with pid 72535' 00:19:37.328 killing process with pid 72535 00:19:37.328 10:00:50 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@967 -- # kill 72535 00:19:37.328 10:00:50 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@972 -- # wait 72535 00:19:37.586 10:00:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:37.847 10:00:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@141 -- # trap - SIGINT SIGTERM EXIT 00:19:37.847 10:00:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@142 -- # nvmftestfini 00:19:37.847 10:00:51 
nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:37.847 10:00:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@117 -- # sync 00:19:37.847 10:00:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:37.847 10:00:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@120 -- # set +e 00:19:37.847 10:00:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:37.847 10:00:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:37.847 rmmod nvme_tcp 00:19:37.847 rmmod nvme_fabrics 00:19:37.847 rmmod nvme_keyring 00:19:37.847 10:00:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:37.847 10:00:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@124 -- # set -e 00:19:37.847 10:00:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@125 -- # return 0 00:19:37.847 10:00:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@489 -- # '[' -n 72161 ']' 00:19:37.847 10:00:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@490 -- # killprocess 72161 00:19:37.847 10:00:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@948 -- # '[' -z 72161 ']' 00:19:37.847 10:00:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # kill -0 72161 00:19:37.847 10:00:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # uname 00:19:37.847 10:00:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:37.847 10:00:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 72161 00:19:37.847 10:00:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:37.847 10:00:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:19:37.847 killing process with pid 72161 00:19:37.847 10:00:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@966 -- # echo 'killing process with pid 72161' 00:19:37.847 10:00:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@967 -- # kill 72161 00:19:37.847 10:00:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@972 -- # wait 72161 00:19:38.109 10:00:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:38.109 10:00:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:38.109 10:00:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:38.109 10:00:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:38.109 10:00:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:38.109 10:00:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:38.109 10:00:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:38.109 10:00:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:38.109 10:00:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:19:38.109 ************************************ 00:19:38.109 END TEST nvmf_ns_masking 00:19:38.109 ************************************ 00:19:38.109 00:19:38.109 real 0m17.443s 00:19:38.109 user 0m27.455s 00:19:38.109 sys 0m2.714s 00:19:38.109 10:00:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:38.109 10:00:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:19:38.369 10:00:51 nvmf_tcp -- common/autotest_common.sh@1142 
-- # return 0 00:19:38.369 10:00:51 nvmf_tcp -- nvmf/nvmf.sh@37 -- # [[ 0 -eq 1 ]] 00:19:38.369 10:00:51 nvmf_tcp -- nvmf/nvmf.sh@40 -- # [[ 0 -eq 1 ]] 00:19:38.369 10:00:51 nvmf_tcp -- nvmf/nvmf.sh@47 -- # run_test nvmf_host_management /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:19:38.369 10:00:51 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:19:38.369 10:00:51 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:38.369 10:00:51 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:38.369 ************************************ 00:19:38.369 START TEST nvmf_host_management 00:19:38.369 ************************************ 00:19:38.369 10:00:51 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:19:38.369 * Looking for test storage... 00:19:38.369 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:19:38.369 10:00:51 nvmf_tcp.nvmf_host_management -- target/host_management.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:38.369 10:00:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:19:38.369 10:00:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:38.369 10:00:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:38.369 10:00:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:38.369 10:00:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:38.369 10:00:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:38.369 10:00:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:38.369 10:00:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:38.369 10:00:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:38.369 10:00:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:38.369 10:00:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:38.369 10:00:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec 00:19:38.369 10:00:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=a2b6b25a-cc90-4aea-9f09-c06f8a634aec 00:19:38.369 10:00:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:38.369 10:00:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:38.369 10:00:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:38.369 10:00:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:38.369 10:00:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:38.369 10:00:51 nvmf_tcp.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:38.369 10:00:51 nvmf_tcp.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:38.369 10:00:51 nvmf_tcp.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:38.369 10:00:51 nvmf_tcp.nvmf_host_management -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:38.369 10:00:51 nvmf_tcp.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:38.369 10:00:51 nvmf_tcp.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:38.369 10:00:51 nvmf_tcp.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:19:38.369 10:00:51 nvmf_tcp.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:38.369 10:00:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@47 -- # : 0 00:19:38.369 10:00:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:38.369 10:00:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:38.369 10:00:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:38.369 10:00:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:38.369 10:00:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:38.369 10:00:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:38.369 10:00:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:38.369 10:00:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:38.369 10:00:51 nvmf_tcp.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:38.369 10:00:51 nvmf_tcp.nvmf_host_management -- target/host_management.sh@12 -- # 
MALLOC_BLOCK_SIZE=512 00:19:38.369 10:00:51 nvmf_tcp.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:19:38.369 10:00:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:38.369 10:00:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:38.369 10:00:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:38.369 10:00:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:38.369 10:00:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:38.369 10:00:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:38.369 10:00:51 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:38.369 10:00:51 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:38.369 10:00:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:19:38.369 10:00:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:19:38.369 10:00:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:19:38.369 10:00:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:19:38.369 10:00:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:19:38.369 10:00:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@432 -- # nvmf_veth_init 00:19:38.369 10:00:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:38.369 10:00:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:38.369 10:00:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:19:38.369 10:00:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:19:38.369 10:00:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:38.369 10:00:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:38.369 10:00:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:38.369 10:00:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:38.369 10:00:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:38.369 10:00:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:38.369 10:00:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:38.369 10:00:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:38.369 10:00:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:19:38.369 10:00:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:19:38.369 Cannot find device "nvmf_tgt_br" 00:19:38.369 10:00:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@155 -- # true 00:19:38.369 10:00:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:19:38.628 Cannot find device "nvmf_tgt_br2" 00:19:38.628 10:00:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@156 -- # true 00:19:38.628 10:00:51 
nvmf_tcp.nvmf_host_management -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:19:38.628 10:00:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:19:38.628 Cannot find device "nvmf_tgt_br" 00:19:38.628 10:00:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@158 -- # true 00:19:38.628 10:00:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:19:38.628 Cannot find device "nvmf_tgt_br2" 00:19:38.628 10:00:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@159 -- # true 00:19:38.628 10:00:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:19:38.628 10:00:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:19:38.628 10:00:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:38.628 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:38.628 10:00:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@162 -- # true 00:19:38.628 10:00:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:38.628 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:38.628 10:00:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@163 -- # true 00:19:38.628 10:00:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:19:38.628 10:00:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:38.628 10:00:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:38.628 10:00:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:38.628 10:00:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:38.628 10:00:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:38.628 10:00:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:38.628 10:00:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:19:38.628 10:00:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:19:38.628 10:00:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:19:38.628 10:00:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:19:38.628 10:00:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:19:38.628 10:00:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:19:38.628 10:00:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:38.628 10:00:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:38.628 10:00:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:38.628 10:00:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:19:38.628 
10:00:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:19:38.628 10:00:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:19:38.628 10:00:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:38.628 10:00:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:38.628 10:00:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:38.887 10:00:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:38.887 10:00:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:19:38.887 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:38.887 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.157 ms 00:19:38.887 00:19:38.887 --- 10.0.0.2 ping statistics --- 00:19:38.887 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:38.887 rtt min/avg/max/mdev = 0.157/0.157/0.157/0.000 ms 00:19:38.887 10:00:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:19:38.887 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:38.887 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.120 ms 00:19:38.887 00:19:38.887 --- 10.0.0.3 ping statistics --- 00:19:38.887 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:38.887 rtt min/avg/max/mdev = 0.120/0.120/0.120/0.000 ms 00:19:38.887 10:00:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:38.887 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:38.887 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.076 ms 00:19:38.887 00:19:38.887 --- 10.0.0.1 ping statistics --- 00:19:38.887 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:38.887 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:19:38.887 10:00:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:38.887 10:00:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@433 -- # return 0 00:19:38.887 10:00:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:38.887 10:00:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:38.887 10:00:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:38.887 10:00:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:38.887 10:00:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:38.887 10:00:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:38.887 10:00:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:38.887 10:00:52 nvmf_tcp.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:19:38.887 10:00:52 nvmf_tcp.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:19:38.887 10:00:52 nvmf_tcp.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:19:38.887 10:00:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:38.887 10:00:52 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:38.887 10:00:52 nvmf_tcp.nvmf_host_management -- 
common/autotest_common.sh@10 -- # set +x 00:19:38.887 10:00:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@481 -- # nvmfpid=72898 00:19:38.887 10:00:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:19:38.887 10:00:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 72898 00:19:38.887 10:00:52 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 72898 ']' 00:19:38.887 10:00:52 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:38.887 10:00:52 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:38.887 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:38.887 10:00:52 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:38.887 10:00:52 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:38.887 10:00:52 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:19:38.887 [2024-07-15 10:00:52.347363] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:19:38.887 [2024-07-15 10:00:52.347447] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:39.146 [2024-07-15 10:00:52.489923] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:39.146 [2024-07-15 10:00:52.603613] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:39.146 [2024-07-15 10:00:52.603687] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:39.146 [2024-07-15 10:00:52.603694] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:39.146 [2024-07-15 10:00:52.603700] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:39.146 [2024-07-15 10:00:52.603704] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
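The nvmf_veth_init trace above reduces to the condensed sketch below; every interface name, address, and rule is taken from the commands already logged, with the per-interface "ip link set ... up" calls and the three verification pings summarized in the trailing comment.
# Condensed restatement of the test-network setup traced above (nvmf_veth_init).
ip netns add nvmf_tgt_ns_spdk                                  # target runs in its own network namespace
ip link add nvmf_init_if type veth peer name nvmf_init_br      # initiator pair (stays on the host)
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br        # target pair 1
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2      # target pair 2
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                 # move the target ends into the namespace
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if                       # initiator address
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
ip link add nvmf_br type bridge                                # bridge the host-side peers together
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # accept NVMe/TCP to the target port
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
# plus 'ip link set ... up' on each interface, and pings to 10.0.0.2, 10.0.0.3 and 10.0.0.1 to verify the path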
00:19:39.146 [2024-07-15 10:00:52.603791] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:39.146 [2024-07-15 10:00:52.604037] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:19:39.146 [2024-07-15 10:00:52.605265] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:39.146 [2024-07-15 10:00:52.605266] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:19:39.716 10:00:53 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:39.716 10:00:53 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:19:39.716 10:00:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:39.716 10:00:53 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:39.716 10:00:53 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:19:39.992 10:00:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:39.992 10:00:53 nvmf_tcp.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:39.992 10:00:53 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:39.992 10:00:53 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:19:39.992 [2024-07-15 10:00:53.334472] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:39.992 10:00:53 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:39.992 10:00:53 nvmf_tcp.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:19:39.992 10:00:53 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:39.992 10:00:53 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:19:39.992 10:00:53 nvmf_tcp.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:19:39.992 10:00:53 nvmf_tcp.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:19:39.992 10:00:53 nvmf_tcp.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:19:39.992 10:00:53 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:39.992 10:00:53 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:19:39.992 Malloc0 00:19:39.992 [2024-07-15 10:00:53.410584] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:39.993 10:00:53 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:39.993 10:00:53 nvmf_tcp.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:19:39.993 10:00:53 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:39.993 10:00:53 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:19:39.993 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
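The rpcs.txt applied by rpc_cmd above is generated by host_management.sh, but its contents are not echoed into this log. A minimal reconstruction, assuming the usual malloc-backed setup and using only values visible elsewhere in this run (MALLOC_BDEV_SIZE=64, MALLOC_BLOCK_SIZE=512, NVMF_SERIAL=SPDKISFASTANDAWESOME, the Malloc0 bdev created here, the 10.0.0.2:4420 listener, and the nqn.2016-06.io.spdk:cnode0 subsystem the bdevperf config below attaches to), would look roughly like this; the tcp transport itself was already created by the nvmf_create_transport call traced above.
# Hypothetical reconstruction of rpcs.txt -- the real file is not printed in this log.
scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0                  # MALLOC_BDEV_SIZE x MALLOC_BLOCK_SIZE
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDKISFASTANDAWESOME
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420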
00:19:39.993 10:00:53 nvmf_tcp.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=72972 00:19:39.993 10:00:53 nvmf_tcp.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 72972 /var/tmp/bdevperf.sock 00:19:39.993 10:00:53 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 72972 ']' 00:19:39.993 10:00:53 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:39.993 10:00:53 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:19:39.993 10:00:53 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:19:39.993 10:00:53 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:39.993 10:00:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:19:39.993 10:00:53 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:39.993 10:00:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:19:39.993 10:00:53 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:39.993 10:00:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:39.993 10:00:53 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:19:39.993 10:00:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:39.993 { 00:19:39.993 "params": { 00:19:39.993 "name": "Nvme$subsystem", 00:19:39.993 "trtype": "$TEST_TRANSPORT", 00:19:39.993 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:39.993 "adrfam": "ipv4", 00:19:39.993 "trsvcid": "$NVMF_PORT", 00:19:39.993 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:39.993 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:39.993 "hdgst": ${hdgst:-false}, 00:19:39.993 "ddgst": ${ddgst:-false} 00:19:39.993 }, 00:19:39.993 "method": "bdev_nvme_attach_controller" 00:19:39.993 } 00:19:39.993 EOF 00:19:39.993 )") 00:19:39.993 10:00:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:19:39.993 10:00:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:19:39.993 10:00:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:19:39.994 10:00:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:19:39.994 "params": { 00:19:39.994 "name": "Nvme0", 00:19:39.994 "trtype": "tcp", 00:19:39.994 "traddr": "10.0.0.2", 00:19:39.994 "adrfam": "ipv4", 00:19:39.994 "trsvcid": "4420", 00:19:39.994 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:19:39.994 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:19:39.994 "hdgst": false, 00:19:39.994 "ddgst": false 00:19:39.994 }, 00:19:39.994 "method": "bdev_nvme_attach_controller" 00:19:39.994 }' 00:19:39.994 [2024-07-15 10:00:53.512316] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
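The config handed to bdevperf arrives via process substitution (--json /dev/fd/63) and is built from the heredoc template traced above. To rerun the same 10-second verify workload by hand, the attach-controller fragment printed by gen_nvmf_target_json can be saved into an ordinary file wrapped in the standard SPDK JSON-config envelope; the file name and the exact envelope below are illustrative assumptions, not output copied from this log.
# Hypothetical standalone rerun of the bdevperf workload shown above.
cat > /tmp/bdevperf_nvme0.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0", "trtype": "tcp", "traddr": "10.0.0.2", "adrfam": "ipv4",
            "trsvcid": "4420", "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0", "hdgst": false, "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /tmp/bdevperf_nvme0.json -q 64 -o 65536 -w verify -t 10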
00:19:39.994 [2024-07-15 10:00:53.512395] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72972 ] 00:19:40.257 [2024-07-15 10:00:53.637879] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:40.257 [2024-07-15 10:00:53.776300] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:40.515 Running I/O for 10 seconds... 00:19:41.082 10:00:54 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:41.082 10:00:54 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:19:41.082 10:00:54 nvmf_tcp.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:19:41.082 10:00:54 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:41.082 10:00:54 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:19:41.082 10:00:54 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:41.082 10:00:54 nvmf_tcp.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:41.082 10:00:54 nvmf_tcp.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:19:41.082 10:00:54 nvmf_tcp.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:19:41.082 10:00:54 nvmf_tcp.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:19:41.082 10:00:54 nvmf_tcp.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:19:41.082 10:00:54 nvmf_tcp.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:19:41.082 10:00:54 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:19:41.082 10:00:54 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:19:41.082 10:00:54 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:19:41.082 10:00:54 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:19:41.082 10:00:54 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:41.082 10:00:54 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:19:41.082 10:00:54 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:41.082 10:00:54 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=874 00:19:41.082 10:00:54 nvmf_tcp.nvmf_host_management -- target/host_management.sh@58 -- # '[' 874 -ge 100 ']' 00:19:41.082 10:00:54 nvmf_tcp.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:19:41.082 10:00:54 nvmf_tcp.nvmf_host_management -- target/host_management.sh@60 -- # break 00:19:41.082 10:00:54 nvmf_tcp.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:19:41.082 10:00:54 nvmf_tcp.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:19:41.082 10:00:54 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:41.082 
10:00:54 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:19:41.082 [2024-07-15 10:00:54.491360] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x56e310 is same with the state(5) to be set 00:19:41.082 [2024-07-15 10:00:54.491445] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x56e310 is same with the state(5) to be set 00:19:41.082 [2024-07-15 10:00:54.491453] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x56e310 is same with the state(5) to be set 00:19:41.082 [2024-07-15 10:00:54.491459] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x56e310 is same with the state(5) to be set 00:19:41.082 [2024-07-15 10:00:54.491465] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x56e310 is same with the state(5) to be set 00:19:41.082 [2024-07-15 10:00:54.491471] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x56e310 is same with the state(5) to be set 00:19:41.083 [2024-07-15 10:00:54.491477] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x56e310 is same with the state(5) to be set 00:19:41.083 [2024-07-15 10:00:54.491483] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x56e310 is same with the state(5) to be set 00:19:41.083 [2024-07-15 10:00:54.491489] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x56e310 is same with the state(5) to be set 00:19:41.083 [2024-07-15 10:00:54.491494] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x56e310 is same with the state(5) to be set 00:19:41.083 [2024-07-15 10:00:54.491500] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x56e310 is same with the state(5) to be set 00:19:41.083 [2024-07-15 10:00:54.491506] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x56e310 is same with the state(5) to be set 00:19:41.083 [2024-07-15 10:00:54.491511] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x56e310 is same with the state(5) to be set 00:19:41.083 [2024-07-15 10:00:54.491516] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x56e310 is same with the state(5) to be set 00:19:41.083 [2024-07-15 10:00:54.491522] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x56e310 is same with the state(5) to be set 00:19:41.083 [2024-07-15 10:00:54.491528] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x56e310 is same with the state(5) to be set 00:19:41.083 [2024-07-15 10:00:54.491534] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x56e310 is same with the state(5) to be set 00:19:41.083 [2024-07-15 10:00:54.491540] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x56e310 is same with the state(5) to be set 00:19:41.083 [2024-07-15 10:00:54.491545] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x56e310 is same with the state(5) to be set 00:19:41.083 [2024-07-15 10:00:54.491551] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x56e310 is same with the state(5) to be set 00:19:41.083 [2024-07-15 10:00:54.491556] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x56e310 is same with the state(5) to be set 00:19:41.083 [2024-07-15 10:00:54.491562] 
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x56e310 is same with the state(5) to be set 00:19:41.083 [2024-07-15 10:00:54.491567] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x56e310 is same with the state(5) to be set 00:19:41.083 [2024-07-15 10:00:54.491572] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x56e310 is same with the state(5) to be set 00:19:41.083 [2024-07-15 10:00:54.491578] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x56e310 is same with the state(5) to be set 00:19:41.083 [2024-07-15 10:00:54.491583] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x56e310 is same with the state(5) to be set 00:19:41.083 [2024-07-15 10:00:54.491588] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x56e310 is same with the state(5) to be set 00:19:41.083 [2024-07-15 10:00:54.491594] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x56e310 is same with the state(5) to be set 00:19:41.083 [2024-07-15 10:00:54.491599] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x56e310 is same with the state(5) to be set 00:19:41.083 [2024-07-15 10:00:54.491605] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x56e310 is same with the state(5) to be set 00:19:41.083 [2024-07-15 10:00:54.491610] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x56e310 is same with the state(5) to be set 00:19:41.083 [2024-07-15 10:00:54.491616] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x56e310 is same with the state(5) to be set 00:19:41.083 [2024-07-15 10:00:54.491622] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x56e310 is same with the state(5) to be set 00:19:41.083 [2024-07-15 10:00:54.491627] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x56e310 is same with the state(5) to be set 00:19:41.083 [2024-07-15 10:00:54.491633] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x56e310 is same with the state(5) to be set 00:19:41.083 [2024-07-15 10:00:54.491638] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x56e310 is same with the state(5) to be set 00:19:41.083 [2024-07-15 10:00:54.491644] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x56e310 is same with the state(5) to be set 00:19:41.083 [2024-07-15 10:00:54.491650] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x56e310 is same with the state(5) to be set 00:19:41.083 [2024-07-15 10:00:54.491656] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x56e310 is same with the state(5) to be set 00:19:41.083 [2024-07-15 10:00:54.491673] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x56e310 is same with the state(5) to be set 00:19:41.083 [2024-07-15 10:00:54.491679] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x56e310 is same with the state(5) to be set 00:19:41.083 [2024-07-15 10:00:54.491684] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x56e310 is same with the state(5) to be set 00:19:41.083 [2024-07-15 10:00:54.491690] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x56e310 is same with the state(5) to be set 
00:19:41.083 [2024-07-15 10:00:54.491695] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x56e310 is same with the state(5) to be set 00:19:41.083 [2024-07-15 10:00:54.491701] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x56e310 is same with the state(5) to be set 00:19:41.083 [2024-07-15 10:00:54.491706] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x56e310 is same with the state(5) to be set 00:19:41.083 [2024-07-15 10:00:54.491712] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x56e310 is same with the state(5) to be set 00:19:41.083 [2024-07-15 10:00:54.491717] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x56e310 is same with the state(5) to be set 00:19:41.083 [2024-07-15 10:00:54.491722] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x56e310 is same with the state(5) to be set 00:19:41.083 [2024-07-15 10:00:54.491728] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x56e310 is same with the state(5) to be set 00:19:41.083 [2024-07-15 10:00:54.491733] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x56e310 is same with the state(5) to be set 00:19:41.083 [2024-07-15 10:00:54.491739] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x56e310 is same with the state(5) to be set 00:19:41.083 [2024-07-15 10:00:54.491745] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x56e310 is same with the state(5) to be set 00:19:41.083 [2024-07-15 10:00:54.491751] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x56e310 is same with the state(5) to be set 00:19:41.083 [2024-07-15 10:00:54.491756] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x56e310 is same with the state(5) to be set 00:19:41.083 [2024-07-15 10:00:54.491762] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x56e310 is same with the state(5) to be set 00:19:41.083 [2024-07-15 10:00:54.491768] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x56e310 is same with the state(5) to be set 00:19:41.083 [2024-07-15 10:00:54.491774] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x56e310 is same with the state(5) to be set 00:19:41.083 [2024-07-15 10:00:54.491779] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x56e310 is same with the state(5) to be set 00:19:41.083 [2024-07-15 10:00:54.491784] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x56e310 is same with the state(5) to be set 00:19:41.083 [2024-07-15 10:00:54.491789] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x56e310 is same with the state(5) to be set 00:19:41.083 [2024-07-15 10:00:54.491795] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x56e310 is same with the state(5) to be set 00:19:41.083 [2024-07-15 10:00:54.491912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:114688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.083 [2024-07-15 10:00:54.491953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.083 [2024-07-15 10:00:54.491978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:1 nsid:1 lba:114816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.083 [2024-07-15 10:00:54.491986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.083 [2024-07-15 10:00:54.491995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:114944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.083 [2024-07-15 10:00:54.492002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.083 [2024-07-15 10:00:54.492011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:115072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.083 [2024-07-15 10:00:54.492017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.083 [2024-07-15 10:00:54.492026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:115200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.083 [2024-07-15 10:00:54.492033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.083 [2024-07-15 10:00:54.492041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:115328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.083 [2024-07-15 10:00:54.492048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.083 [2024-07-15 10:00:54.492056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:115456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.083 [2024-07-15 10:00:54.492064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.083 [2024-07-15 10:00:54.492072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:115584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.083 [2024-07-15 10:00:54.492078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.083 [2024-07-15 10:00:54.492087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:115712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.083 [2024-07-15 10:00:54.492093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.083 [2024-07-15 10:00:54.492101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:115840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.083 [2024-07-15 10:00:54.492107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.083 [2024-07-15 10:00:54.492115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:115968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.083 [2024-07-15 10:00:54.492122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.083 [2024-07-15 10:00:54.492130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 
lba:116096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.083 [2024-07-15 10:00:54.492136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.083 [2024-07-15 10:00:54.492145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:116224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.083 [2024-07-15 10:00:54.492151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.083 [2024-07-15 10:00:54.492159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:116352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.083 [2024-07-15 10:00:54.492165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.083 [2024-07-15 10:00:54.492174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:116480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.083 [2024-07-15 10:00:54.492196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.083 [2024-07-15 10:00:54.492210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:116608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.083 [2024-07-15 10:00:54.492216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.083 [2024-07-15 10:00:54.492225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:116736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.083 [2024-07-15 10:00:54.492232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.083 [2024-07-15 10:00:54.492240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:116864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.083 [2024-07-15 10:00:54.492246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.083 [2024-07-15 10:00:54.492254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:116992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.083 [2024-07-15 10:00:54.492261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.083 [2024-07-15 10:00:54.492269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:117120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.083 [2024-07-15 10:00:54.492275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.083 [2024-07-15 10:00:54.492284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:117248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.083 [2024-07-15 10:00:54.492291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.083 [2024-07-15 10:00:54.492299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:117376 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.083 [2024-07-15 10:00:54.492306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.083 [2024-07-15 10:00:54.492314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:117504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.083 [2024-07-15 10:00:54.492320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.083 [2024-07-15 10:00:54.492328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:117632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.083 [2024-07-15 10:00:54.492334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.083 [2024-07-15 10:00:54.492342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:117760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.083 [2024-07-15 10:00:54.492350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.083 [2024-07-15 10:00:54.492358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:117888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.083 [2024-07-15 10:00:54.492364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.083 [2024-07-15 10:00:54.492372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:118016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.083 [2024-07-15 10:00:54.492379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.083 [2024-07-15 10:00:54.492387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:118144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.083 [2024-07-15 10:00:54.492393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.083 [2024-07-15 10:00:54.492401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:118272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.083 [2024-07-15 10:00:54.492408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.083 [2024-07-15 10:00:54.492416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:118400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.083 [2024-07-15 10:00:54.492422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.083 [2024-07-15 10:00:54.492430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:118528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.083 [2024-07-15 10:00:54.492438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.083 [2024-07-15 10:00:54.492447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:118656 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.083 [2024-07-15 10:00:54.492454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.083 [2024-07-15 10:00:54.492463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:118784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.083 [2024-07-15 10:00:54.492469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.083 [2024-07-15 10:00:54.492478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:118912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.083 [2024-07-15 10:00:54.492484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.083 [2024-07-15 10:00:54.492492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:119040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.083 [2024-07-15 10:00:54.492499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.083 [2024-07-15 10:00:54.492507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:119168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.083 [2024-07-15 10:00:54.492513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.083 [2024-07-15 10:00:54.492527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:119296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.083 [2024-07-15 10:00:54.492534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.083 [2024-07-15 10:00:54.492543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:119424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.083 [2024-07-15 10:00:54.492549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.083 [2024-07-15 10:00:54.492558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:119552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.083 [2024-07-15 10:00:54.492565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.083 [2024-07-15 10:00:54.492573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:119680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.083 [2024-07-15 10:00:54.492580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.083 [2024-07-15 10:00:54.492588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:119808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.083 [2024-07-15 10:00:54.492595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.083 [2024-07-15 10:00:54.492603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:119936 len:128 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:19:41.083 [2024-07-15 10:00:54.492613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.083 [2024-07-15 10:00:54.492626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:120064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.083 [2024-07-15 10:00:54.492636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.083 [2024-07-15 10:00:54.492645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:120192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.083 [2024-07-15 10:00:54.492652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.083 [2024-07-15 10:00:54.492673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:120320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.083 [2024-07-15 10:00:54.492681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.083 [2024-07-15 10:00:54.492689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:120448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.083 [2024-07-15 10:00:54.492696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.083 [2024-07-15 10:00:54.492704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:120576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.083 [2024-07-15 10:00:54.492711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.083 [2024-07-15 10:00:54.492721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:120704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.083 [2024-07-15 10:00:54.492728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.083 [2024-07-15 10:00:54.492736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:120832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.083 [2024-07-15 10:00:54.492743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.083 [2024-07-15 10:00:54.492751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:120960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.083 [2024-07-15 10:00:54.492757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.083 [2024-07-15 10:00:54.492766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:121088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.083 [2024-07-15 10:00:54.492773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.083 [2024-07-15 10:00:54.492785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:121216 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:19:41.083 [2024-07-15 10:00:54.492795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.083 [2024-07-15 10:00:54.492810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:121344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.083 [2024-07-15 10:00:54.492818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.083 [2024-07-15 10:00:54.492829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:121472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.083 [2024-07-15 10:00:54.492839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.083 [2024-07-15 10:00:54.492853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:121600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.083 [2024-07-15 10:00:54.492861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.083 [2024-07-15 10:00:54.492871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:121728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.084 [2024-07-15 10:00:54.492878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.084 [2024-07-15 10:00:54.492886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:121856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.084 [2024-07-15 10:00:54.492892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.084 [2024-07-15 10:00:54.492900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:121984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.084 [2024-07-15 10:00:54.492907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.084 [2024-07-15 10:00:54.492916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:122112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.084 [2024-07-15 10:00:54.492923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.084 [2024-07-15 10:00:54.492931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:122240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.084 [2024-07-15 10:00:54.492937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.084 [2024-07-15 10:00:54.492946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:122368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.084 [2024-07-15 10:00:54.492952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.084 [2024-07-15 10:00:54.492961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:122496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:19:41.084 [2024-07-15 10:00:54.492967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.084 [2024-07-15 10:00:54.492975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:122624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.084 [2024-07-15 10:00:54.492981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.084 [2024-07-15 10:00:54.492992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:122752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.084 [2024-07-15 10:00:54.492999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.084 [2024-07-15 10:00:54.493007] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcb6820 is same with the state(5) to be set 00:19:41.084 [2024-07-15 10:00:54.493074] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xcb6820 was disconnected and freed. reset controller. 00:19:41.084 10:00:54 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:41.084 10:00:54 nvmf_tcp.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:19:41.084 [2024-07-15 10:00:54.494277] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:19:41.084 10:00:54 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:41.084 10:00:54 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:19:41.084 task offset: 114688 on job bdev=Nvme0n1 fails 00:19:41.084 00:19:41.084 Latency(us) 00:19:41.084 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:41.084 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:41.084 Job: Nvme0n1 ended in about 0.55 seconds with error 00:19:41.084 Verification LBA range: start 0x0 length 0x400 00:19:41.084 Nvme0n1 : 0.55 1622.29 101.39 115.88 0.00 35891.17 4464.46 32968.33 00:19:41.084 =================================================================================================================== 00:19:41.084 Total : 1622.29 101.39 115.88 0.00 35891.17 4464.46 32968.33 00:19:41.084 [2024-07-15 10:00:54.496679] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:19:41.084 [2024-07-15 10:00:54.496716] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcb6af0 (9): Bad file descriptor 00:19:41.084 [2024-07-15 10:00:54.500565] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
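The throughput line in the failed-job table above is consistent with the workload parameters: 1622.29 IOPS at the 64 KiB (65536-byte) IO size used by bdevperf works out to roughly 101.4 MiB/s, matching the 101.39 MiB/s column. A minimal sketch for digesting a qpair dump like the one above, assuming the run's output was saved to a file named host_management.log (a hypothetical name, not something the test itself produces):

# count the READ commands and the completions aborted by SQ deletion
grep -c 'nvme_io_qpair_print_command.*READ sqid:1' host_management.log
grep -c 'ABORTED - SQ DELETION' host_management.log

# cross-check the MiB/s column from IOPS and the 64 KiB IO size
awk 'BEGIN { printf "%.2f MiB/s\n", 1622.29 * 65536 / (1024 * 1024) }'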
00:19:41.084 10:00:54 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:41.084 10:00:54 nvmf_tcp.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:19:42.025 10:00:55 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 72972 00:19:42.025 /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh: line 91: kill: (72972) - No such process 00:19:42.025 10:00:55 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # true 00:19:42.025 10:00:55 nvmf_tcp.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:19:42.025 10:00:55 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:19:42.025 10:00:55 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:19:42.025 10:00:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:19:42.025 10:00:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:19:42.025 10:00:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:42.025 10:00:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:42.025 { 00:19:42.025 "params": { 00:19:42.025 "name": "Nvme$subsystem", 00:19:42.026 "trtype": "$TEST_TRANSPORT", 00:19:42.026 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:42.026 "adrfam": "ipv4", 00:19:42.026 "trsvcid": "$NVMF_PORT", 00:19:42.026 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:42.026 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:42.026 "hdgst": ${hdgst:-false}, 00:19:42.026 "ddgst": ${ddgst:-false} 00:19:42.026 }, 00:19:42.026 "method": "bdev_nvme_attach_controller" 00:19:42.026 } 00:19:42.026 EOF 00:19:42.026 )") 00:19:42.026 10:00:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:19:42.026 10:00:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:19:42.026 10:00:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:19:42.026 10:00:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:19:42.026 "params": { 00:19:42.026 "name": "Nvme0", 00:19:42.026 "trtype": "tcp", 00:19:42.026 "traddr": "10.0.0.2", 00:19:42.026 "adrfam": "ipv4", 00:19:42.026 "trsvcid": "4420", 00:19:42.026 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:19:42.026 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:19:42.026 "hdgst": false, 00:19:42.026 "ddgst": false 00:19:42.026 }, 00:19:42.026 "method": "bdev_nvme_attach_controller" 00:19:42.026 }' 00:19:42.026 [2024-07-15 10:00:55.559865] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:19:42.026 [2024-07-15 10:00:55.559967] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73022 ] 00:19:42.305 [2024-07-15 10:00:55.704823] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:42.305 [2024-07-15 10:00:55.818145] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:42.577 Running I/O for 1 seconds... 
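The JSON fragment printed above is what gen_nvmf_target_json substitutes into its here-doc template for subsystem 0. A standalone sketch that emits the same fragment is shown below; the function name and defaults are illustrative rather than the harness's own helper, and the full --json config handed to bdevperf may wrap this fragment in additional structure that the excerpt does not show:

# illustrative re-creation of the bdev_nvme_attach_controller fragment printed above
emit_attach_controller_params() {
  local subsystem=${1:-0} traddr=${2:-10.0.0.2} trsvcid=${3:-4420}
  cat <<EOF
{
  "params": {
    "name": "Nvme${subsystem}",
    "trtype": "tcp",
    "traddr": "${traddr}",
    "adrfam": "ipv4",
    "trsvcid": "${trsvcid}",
    "subnqn": "nqn.2016-06.io.spdk:cnode${subsystem}",
    "hostnqn": "nqn.2016-06.io.spdk:host${subsystem}",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
}

# e.g. emit_attach_controller_params 0 10.0.0.2 4420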
00:19:43.510 00:19:43.510 Latency(us) 00:19:43.510 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:43.510 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:43.510 Verification LBA range: start 0x0 length 0x400 00:19:43.510 Nvme0n1 : 1.02 1757.27 109.83 0.00 0.00 35744.67 5437.48 32510.43 00:19:43.510 =================================================================================================================== 00:19:43.510 Total : 1757.27 109.83 0.00 0.00 35744.67 5437.48 32510.43 00:19:43.769 10:00:57 nvmf_tcp.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:19:43.769 10:00:57 nvmf_tcp.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:19:43.769 10:00:57 nvmf_tcp.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevperf.conf 00:19:43.769 10:00:57 nvmf_tcp.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:19:43.769 10:00:57 nvmf_tcp.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:19:43.769 10:00:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:43.769 10:00:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@117 -- # sync 00:19:43.769 10:00:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:43.769 10:00:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@120 -- # set +e 00:19:43.769 10:00:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:43.769 10:00:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:43.769 rmmod nvme_tcp 00:19:43.769 rmmod nvme_fabrics 00:19:43.769 rmmod nvme_keyring 00:19:43.769 10:00:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:43.769 10:00:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@124 -- # set -e 00:19:43.769 10:00:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@125 -- # return 0 00:19:43.769 10:00:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@489 -- # '[' -n 72898 ']' 00:19:43.769 10:00:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 72898 00:19:43.769 10:00:57 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@948 -- # '[' -z 72898 ']' 00:19:43.769 10:00:57 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@952 -- # kill -0 72898 00:19:43.769 10:00:57 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@953 -- # uname 00:19:43.769 10:00:57 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:43.769 10:00:57 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 72898 00:19:43.769 10:00:57 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:19:43.769 10:00:57 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:19:43.769 10:00:57 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@966 -- # echo 'killing process with pid 72898' 00:19:43.769 killing process with pid 72898 00:19:43.769 10:00:57 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@967 -- # kill 72898 00:19:43.769 10:00:57 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@972 -- # wait 72898 00:19:44.027 [2024-07-15 10:00:57.527516] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd 
for core 1, errno: 2 00:19:44.027 10:00:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:44.027 10:00:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:44.027 10:00:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:44.027 10:00:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:44.027 10:00:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:44.027 10:00:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:44.027 10:00:57 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:44.027 10:00:57 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:44.027 10:00:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:19:44.027 10:00:57 nvmf_tcp.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:19:44.027 00:19:44.027 real 0m5.863s 00:19:44.027 user 0m22.503s 00:19:44.027 sys 0m1.337s 00:19:44.027 10:00:57 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:44.027 10:00:57 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:19:44.027 ************************************ 00:19:44.027 END TEST nvmf_host_management 00:19:44.027 ************************************ 00:19:44.284 10:00:57 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:19:44.284 10:00:57 nvmf_tcp -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvol /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:19:44.284 10:00:57 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:19:44.284 10:00:57 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:44.284 10:00:57 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:44.284 ************************************ 00:19:44.284 START TEST nvmf_lvol 00:19:44.284 ************************************ 00:19:44.284 10:00:57 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:19:44.284 * Looking for test storage... 
00:19:44.284 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:19:44.284 10:00:57 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:44.284 10:00:57 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:19:44.284 10:00:57 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:44.284 10:00:57 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:44.284 10:00:57 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:44.284 10:00:57 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:44.284 10:00:57 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:44.284 10:00:57 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:44.284 10:00:57 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:44.284 10:00:57 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:44.284 10:00:57 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:44.284 10:00:57 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:44.285 10:00:57 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec 00:19:44.285 10:00:57 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=a2b6b25a-cc90-4aea-9f09-c06f8a634aec 00:19:44.285 10:00:57 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:44.285 10:00:57 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:44.285 10:00:57 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:44.285 10:00:57 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:44.285 10:00:57 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:44.285 10:00:57 nvmf_tcp.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:44.285 10:00:57 nvmf_tcp.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:44.285 10:00:57 nvmf_tcp.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:44.285 10:00:57 nvmf_tcp.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:44.285 10:00:57 nvmf_tcp.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:44.285 10:00:57 nvmf_tcp.nvmf_lvol -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:44.285 10:00:57 nvmf_tcp.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:19:44.285 10:00:57 nvmf_tcp.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:44.285 10:00:57 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@47 -- # : 0 00:19:44.285 10:00:57 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:44.285 10:00:57 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:44.285 10:00:57 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:44.285 10:00:57 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:44.285 10:00:57 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:44.285 10:00:57 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:44.285 10:00:57 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:44.285 10:00:57 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:44.285 10:00:57 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:44.285 10:00:57 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:44.285 10:00:57 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:19:44.285 10:00:57 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:19:44.285 10:00:57 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:44.285 10:00:57 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:19:44.285 10:00:57 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:44.285 10:00:57 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:44.285 10:00:57 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:44.285 10:00:57 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:44.285 10:00:57 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:44.285 10:00:57 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:44.285 10:00:57 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:44.285 10:00:57 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:44.285 10:00:57 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:19:44.285 10:00:57 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:19:44.285 10:00:57 
nvmf_tcp.nvmf_lvol -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:19:44.285 10:00:57 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:19:44.285 10:00:57 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:19:44.285 10:00:57 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@432 -- # nvmf_veth_init 00:19:44.285 10:00:57 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:44.285 10:00:57 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:44.285 10:00:57 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:19:44.285 10:00:57 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:19:44.285 10:00:57 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:44.285 10:00:57 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:44.285 10:00:57 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:44.285 10:00:57 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:44.285 10:00:57 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:44.285 10:00:57 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:44.285 10:00:57 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:44.285 10:00:57 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:44.285 10:00:57 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:19:44.285 10:00:57 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:19:44.285 Cannot find device "nvmf_tgt_br" 00:19:44.543 10:00:57 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@155 -- # true 00:19:44.543 10:00:57 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:19:44.543 Cannot find device "nvmf_tgt_br2" 00:19:44.543 10:00:57 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@156 -- # true 00:19:44.543 10:00:57 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:19:44.543 10:00:57 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:19:44.543 Cannot find device "nvmf_tgt_br" 00:19:44.543 10:00:57 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@158 -- # true 00:19:44.543 10:00:57 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:19:44.543 Cannot find device "nvmf_tgt_br2" 00:19:44.543 10:00:57 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@159 -- # true 00:19:44.543 10:00:57 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:19:44.543 10:00:57 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:19:44.543 10:00:57 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:44.543 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:44.543 10:00:57 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@162 -- # true 00:19:44.543 10:00:57 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:44.543 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:44.543 10:00:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@163 -- # true 00:19:44.543 10:00:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:19:44.543 10:00:58 nvmf_tcp.nvmf_lvol -- 
nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:44.543 10:00:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:44.543 10:00:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:44.543 10:00:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:44.543 10:00:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:44.543 10:00:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:44.543 10:00:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:19:44.543 10:00:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:19:44.543 10:00:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:19:44.543 10:00:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:19:44.543 10:00:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:19:44.543 10:00:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:19:44.543 10:00:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:44.543 10:00:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:44.543 10:00:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:44.543 10:00:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:19:44.543 10:00:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:19:44.543 10:00:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:19:44.543 10:00:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:44.543 10:00:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:44.543 10:00:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:44.802 10:00:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:44.802 10:00:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:19:44.802 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:44.802 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.064 ms 00:19:44.802 00:19:44.802 --- 10.0.0.2 ping statistics --- 00:19:44.802 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:44.802 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:19:44.802 10:00:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:19:44.802 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:44.802 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.070 ms 00:19:44.802 00:19:44.802 --- 10.0.0.3 ping statistics --- 00:19:44.802 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:44.802 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:19:44.802 10:00:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:44.802 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:44.802 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:19:44.802 00:19:44.802 --- 10.0.0.1 ping statistics --- 00:19:44.802 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:44.802 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:19:44.802 10:00:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:44.802 10:00:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@433 -- # return 0 00:19:44.802 10:00:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:44.802 10:00:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:44.802 10:00:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:44.802 10:00:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:44.802 10:00:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:44.802 10:00:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:44.802 10:00:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:44.802 10:00:58 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:19:44.802 10:00:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:44.802 10:00:58 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:44.802 10:00:58 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:19:44.802 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:44.802 10:00:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=73227 00:19:44.802 10:00:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:19:44.802 10:00:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 73227 00:19:44.802 10:00:58 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@829 -- # '[' -z 73227 ']' 00:19:44.802 10:00:58 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:44.802 10:00:58 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:44.802 10:00:58 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:44.802 10:00:58 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:44.802 10:00:58 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:19:44.802 [2024-07-15 10:00:58.203392] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:19:44.802 [2024-07-15 10:00:58.203478] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:44.802 [2024-07-15 10:00:58.343360] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:45.061 [2024-07-15 10:00:58.451586] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:45.061 [2024-07-15 10:00:58.451644] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:19:45.061 [2024-07-15 10:00:58.451654] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:45.061 [2024-07-15 10:00:58.451670] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:45.061 [2024-07-15 10:00:58.451676] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:45.061 [2024-07-15 10:00:58.451936] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:45.061 [2024-07-15 10:00:58.451981] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:45.061 [2024-07-15 10:00:58.451984] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:45.631 10:00:59 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:45.631 10:00:59 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@862 -- # return 0 00:19:45.631 10:00:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:45.631 10:00:59 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:45.631 10:00:59 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:19:45.631 10:00:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:45.631 10:00:59 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:19:45.889 [2024-07-15 10:00:59.375503] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:45.889 10:00:59 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:46.149 10:00:59 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:19:46.149 10:00:59 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:46.409 10:00:59 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:19:46.409 10:00:59 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:19:46.667 10:01:00 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:19:46.924 10:01:00 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=1cca335d-fb64-4fbc-b00f-447728fcc28c 00:19:46.924 10:01:00 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 1cca335d-fb64-4fbc-b00f-447728fcc28c lvol 20 00:19:47.183 10:01:00 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=446f3c76-f2e2-43de-b06c-ed7a01afbce9 00:19:47.183 10:01:00 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:19:47.442 10:01:00 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 446f3c76-f2e2-43de-b06c-ed7a01afbce9 00:19:47.701 10:01:01 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:19:47.960 [2024-07-15 10:01:01.322077] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:47.960 10:01:01 nvmf_tcp.nvmf_lvol -- 
target/nvmf_lvol.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:19:48.218 10:01:01 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:19:48.218 10:01:01 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=73379 00:19:48.218 10:01:01 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:19:49.158 10:01:02 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_snapshot 446f3c76-f2e2-43de-b06c-ed7a01afbce9 MY_SNAPSHOT 00:19:49.417 10:01:02 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=31932298-18c6-4602-b9cd-8be429c409f9 00:19:49.417 10:01:02 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_resize 446f3c76-f2e2-43de-b06c-ed7a01afbce9 30 00:19:49.984 10:01:03 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_clone 31932298-18c6-4602-b9cd-8be429c409f9 MY_CLONE 00:19:50.242 10:01:03 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=c0f8de08-872f-488e-ad85-4e6db5a76d7a 00:19:50.242 10:01:03 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_inflate c0f8de08-872f-488e-ad85-4e6db5a76d7a 00:19:50.809 10:01:04 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 73379 00:19:59.046 Initializing NVMe Controllers 00:19:59.046 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:19:59.046 Controller IO queue size 128, less than required. 00:19:59.046 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:19:59.046 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:19:59.046 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:19:59.046 Initialization complete. Launching workers. 
00:19:59.046 ======================================================== 00:19:59.046 Latency(us) 00:19:59.046 Device Information : IOPS MiB/s Average min max 00:19:59.046 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 10364.45 40.49 12350.59 2382.57 74361.46 00:19:59.046 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10580.04 41.33 12100.07 1208.16 75184.11 00:19:59.046 ======================================================== 00:19:59.046 Total : 20944.48 81.81 12224.04 1208.16 75184.11 00:19:59.046 00:19:59.046 10:01:11 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:19:59.046 10:01:12 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 446f3c76-f2e2-43de-b06c-ed7a01afbce9 00:19:59.046 10:01:12 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 1cca335d-fb64-4fbc-b00f-447728fcc28c 00:19:59.306 10:01:12 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:19:59.307 10:01:12 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:19:59.307 10:01:12 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:19:59.307 10:01:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:59.307 10:01:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@117 -- # sync 00:19:59.307 10:01:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:59.307 10:01:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@120 -- # set +e 00:19:59.307 10:01:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:59.307 10:01:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:59.307 rmmod nvme_tcp 00:19:59.307 rmmod nvme_fabrics 00:19:59.307 rmmod nvme_keyring 00:19:59.307 10:01:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:59.307 10:01:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@124 -- # set -e 00:19:59.307 10:01:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@125 -- # return 0 00:19:59.307 10:01:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 73227 ']' 00:19:59.307 10:01:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 73227 00:19:59.307 10:01:12 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@948 -- # '[' -z 73227 ']' 00:19:59.307 10:01:12 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@952 -- # kill -0 73227 00:19:59.307 10:01:12 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@953 -- # uname 00:19:59.307 10:01:12 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:59.307 10:01:12 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73227 00:19:59.307 10:01:12 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:59.307 10:01:12 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:19:59.307 killing process with pid 73227 00:19:59.307 10:01:12 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73227' 00:19:59.307 10:01:12 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@967 -- # kill 73227 00:19:59.307 10:01:12 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@972 -- # wait 73227 00:19:59.566 10:01:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:59.566 10:01:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 
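Stripped of the xtrace noise, the lvol lifecycle this test exercised and then tore down reduces to the rpc.py sequence below. The relative rpc.py path and the shell variables are placeholders standing in for the absolute paths and UUIDs visible in the trace above; this is a condensed sketch, not the test script itself:

# target setup, as exercised above
scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
scripts/rpc.py bdev_malloc_create 64 512                  # -> Malloc0 in the run above
scripts/rpc.py bdev_malloc_create 64 512                  # -> Malloc1
scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
lvs=$(scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs)  # prints the new lvstore UUID
lvol=$(scripts/rpc.py bdev_lvol_create -u "$lvs" lvol 20) # prints the new lvol UUID
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

# snapshot / grow / clone / inflate, while spdk_nvme_perf (launched above) writes over TCP
snap=$(scripts/rpc.py bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)
scripts/rpc.py bdev_lvol_resize "$lvol" 30
clone=$(scripts/rpc.py bdev_lvol_clone "$snap" MY_CLONE)
scripts/rpc.py bdev_lvol_inflate "$clone"

# teardown, matching the delete calls above
scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
scripts/rpc.py bdev_lvol_delete "$lvol"
scripts/rpc.py bdev_lvol_delete_lvstore -u "$lvs"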
00:19:59.566 10:01:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:59.566 10:01:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:59.566 10:01:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:59.566 10:01:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:59.566 10:01:13 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:59.566 10:01:13 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:59.566 10:01:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:19:59.566 00:19:59.566 real 0m15.418s 00:19:59.566 user 1m5.262s 00:19:59.566 sys 0m3.209s 00:19:59.566 10:01:13 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:59.566 10:01:13 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:19:59.566 ************************************ 00:19:59.566 END TEST nvmf_lvol 00:19:59.566 ************************************ 00:19:59.566 10:01:13 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:19:59.566 10:01:13 nvmf_tcp -- nvmf/nvmf.sh@49 -- # run_test nvmf_lvs_grow /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:19:59.566 10:01:13 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:19:59.566 10:01:13 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:59.566 10:01:13 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:59.826 ************************************ 00:19:59.826 START TEST nvmf_lvs_grow 00:19:59.826 ************************************ 00:19:59.826 10:01:13 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:19:59.826 * Looking for test storage... 
00:19:59.826 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:19:59.826 10:01:13 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:59.826 10:01:13 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:19:59.826 10:01:13 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:59.826 10:01:13 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:59.826 10:01:13 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:59.826 10:01:13 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:59.826 10:01:13 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:59.826 10:01:13 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:59.826 10:01:13 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:59.826 10:01:13 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:59.826 10:01:13 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:59.826 10:01:13 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:59.826 10:01:13 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec 00:19:59.826 10:01:13 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=a2b6b25a-cc90-4aea-9f09-c06f8a634aec 00:19:59.826 10:01:13 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:59.826 10:01:13 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:59.826 10:01:13 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:59.826 10:01:13 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:59.826 10:01:13 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:59.826 10:01:13 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:59.826 10:01:13 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:59.826 10:01:13 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:59.827 10:01:13 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:59.827 10:01:13 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:59.827 10:01:13 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:59.827 10:01:13 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:19:59.827 10:01:13 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:59.827 10:01:13 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:19:59.827 10:01:13 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:59.827 10:01:13 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:59.827 10:01:13 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:59.827 10:01:13 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:59.827 10:01:13 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:59.827 10:01:13 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:59.827 10:01:13 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:59.827 10:01:13 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:59.827 10:01:13 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:59.827 10:01:13 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:59.827 10:01:13 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:19:59.827 10:01:13 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:59.827 10:01:13 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:59.827 10:01:13 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:59.827 10:01:13 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:59.827 10:01:13 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:59.827 10:01:13 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:19:59.827 10:01:13 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:59.827 10:01:13 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:59.827 10:01:13 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:19:59.827 10:01:13 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:19:59.827 10:01:13 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:19:59.827 10:01:13 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:19:59.827 10:01:13 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:19:59.827 10:01:13 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@432 -- # nvmf_veth_init 00:19:59.827 10:01:13 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:59.827 10:01:13 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:59.827 10:01:13 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:19:59.827 10:01:13 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:19:59.827 10:01:13 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:59.827 10:01:13 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:59.827 10:01:13 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:59.827 10:01:13 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:59.827 10:01:13 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:59.827 10:01:13 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:59.827 10:01:13 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:59.827 10:01:13 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:59.827 10:01:13 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:19:59.827 10:01:13 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:19:59.827 Cannot find device "nvmf_tgt_br" 00:19:59.827 10:01:13 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@155 -- # true 00:19:59.827 10:01:13 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:19:59.827 Cannot find device "nvmf_tgt_br2" 00:19:59.827 10:01:13 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@156 -- # true 00:19:59.827 10:01:13 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:19:59.827 10:01:13 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:19:59.827 Cannot find device "nvmf_tgt_br" 00:19:59.827 10:01:13 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@158 -- # true 00:19:59.827 10:01:13 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:19:59.827 Cannot find device "nvmf_tgt_br2" 00:19:59.827 10:01:13 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@159 -- # true 00:19:59.827 10:01:13 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:20:00.087 10:01:13 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:20:00.087 10:01:13 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:00.087 Cannot open network namespace 
"nvmf_tgt_ns_spdk": No such file or directory 00:20:00.087 10:01:13 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@162 -- # true 00:20:00.087 10:01:13 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:00.087 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:00.087 10:01:13 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@163 -- # true 00:20:00.087 10:01:13 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:20:00.087 10:01:13 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:00.087 10:01:13 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:00.087 10:01:13 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:00.087 10:01:13 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:00.087 10:01:13 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:00.087 10:01:13 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:00.087 10:01:13 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:20:00.087 10:01:13 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:20:00.087 10:01:13 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:20:00.087 10:01:13 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:20:00.087 10:01:13 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:20:00.087 10:01:13 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:20:00.087 10:01:13 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:00.087 10:01:13 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:00.087 10:01:13 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:00.087 10:01:13 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:20:00.087 10:01:13 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:20:00.087 10:01:13 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:20:00.087 10:01:13 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:00.087 10:01:13 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:00.087 10:01:13 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:00.087 10:01:13 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:00.087 10:01:13 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:20:00.087 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:20:00.087 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.087 ms 00:20:00.087 00:20:00.087 --- 10.0.0.2 ping statistics --- 00:20:00.087 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:00.087 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:20:00.087 10:01:13 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:20:00.087 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:00.087 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.062 ms 00:20:00.087 00:20:00.087 --- 10.0.0.3 ping statistics --- 00:20:00.087 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:00.087 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:20:00.087 10:01:13 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:00.087 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:00.087 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.056 ms 00:20:00.087 00:20:00.087 --- 10.0.0.1 ping statistics --- 00:20:00.087 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:00.087 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:20:00.087 10:01:13 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:00.087 10:01:13 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@433 -- # return 0 00:20:00.087 10:01:13 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:00.087 10:01:13 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:00.087 10:01:13 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:00.087 10:01:13 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:00.087 10:01:13 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:00.087 10:01:13 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:00.087 10:01:13 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:00.087 10:01:13 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:20:00.087 10:01:13 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:00.087 10:01:13 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:00.087 10:01:13 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:20:00.087 10:01:13 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=73750 00:20:00.087 10:01:13 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 73750 00:20:00.087 10:01:13 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:20:00.087 10:01:13 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@829 -- # '[' -z 73750 ']' 00:20:00.087 10:01:13 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:00.087 10:01:13 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:00.087 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:00.087 10:01:13 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
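The trace above is nvmf_veth_init bringing up the test network before nvmf_tgt is launched inside the namespace. Condensed into one place, and keeping only commands that actually appear in the log (the second target interface, nvmf_tgt_if2 / 10.0.0.3, follows exactly the same pattern and is left out; pre-run cleanup of stale devices is also omitted), the topology it builds looks roughly like this:

# sketch of the veth/netns topology from the nvmf_veth_init trace above
ip netns add nvmf_tgt_ns_spdk                                          # target runs in its own namespace
ip link add nvmf_init_if type veth peer name nvmf_init_br              # initiator-side veth pair
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br                # target-side veth pair
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                         # move target end into the namespace
ip addr add 10.0.0.1/24 dev nvmf_init_if                               # initiator address
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if # first target address
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge && ip link set nvmf_br up              # bridge ties the host-side peers together
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT      # let NVMe/TCP (port 4420) in
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2                                                     # reachability check seen in the log
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1

With that in place, nvmf_tgt is started under "ip netns exec nvmf_tgt_ns_spdk" (pid 73750 above) and later listens on 10.0.0.2 port 4420 for the initiator side of the test.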
00:20:00.087 10:01:13 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:00.087 10:01:13 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:20:00.347 [2024-07-15 10:01:13.683616] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:20:00.347 [2024-07-15 10:01:13.683734] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:00.347 [2024-07-15 10:01:13.809069] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:00.605 [2024-07-15 10:01:13.937513] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:00.605 [2024-07-15 10:01:13.937572] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:00.605 [2024-07-15 10:01:13.937580] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:00.605 [2024-07-15 10:01:13.937586] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:00.605 [2024-07-15 10:01:13.937591] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:00.605 [2024-07-15 10:01:13.937619] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:01.173 10:01:14 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:01.173 10:01:14 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@862 -- # return 0 00:20:01.173 10:01:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:01.173 10:01:14 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:01.173 10:01:14 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:20:01.173 10:01:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:01.173 10:01:14 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:20:01.433 [2024-07-15 10:01:14.840084] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:01.433 10:01:14 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:20:01.433 10:01:14 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:20:01.433 10:01:14 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:01.433 10:01:14 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:20:01.433 ************************************ 00:20:01.433 START TEST lvs_grow_clean 00:20:01.433 ************************************ 00:20:01.433 10:01:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1123 -- # lvs_grow 00:20:01.433 10:01:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:20:01.433 10:01:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:20:01.433 10:01:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:20:01.433 10:01:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:20:01.433 10:01:14 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:20:01.433 10:01:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:20:01.433 10:01:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:20:01.433 10:01:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:20:01.433 10:01:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:20:01.692 10:01:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:20:01.692 10:01:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:20:01.952 10:01:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=bccd20f8-f9ea-46a2-b5c3-ee29ab2c6c76 00:20:01.952 10:01:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:20:01.952 10:01:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bccd20f8-f9ea-46a2-b5c3-ee29ab2c6c76 00:20:02.211 10:01:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:20:02.211 10:01:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:20:02.211 10:01:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u bccd20f8-f9ea-46a2-b5c3-ee29ab2c6c76 lvol 150 00:20:02.471 10:01:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=b9be42e6-a35b-4c76-92a5-fd7608cfc602 00:20:02.471 10:01:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:20:02.471 10:01:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:20:02.730 [2024-07-15 10:01:16.102594] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:20:02.730 [2024-07-15 10:01:16.102685] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:20:02.730 true 00:20:02.730 10:01:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:20:02.730 10:01:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bccd20f8-f9ea-46a2-b5c3-ee29ab2c6c76 00:20:02.992 10:01:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:20:02.992 10:01:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:20:03.252 10:01:16 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 b9be42e6-a35b-4c76-92a5-fd7608cfc602 00:20:03.252 10:01:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:20:03.511 [2024-07-15 10:01:17.018161] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:03.511 10:01:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:20:03.771 10:01:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:20:03.771 10:01:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=73907 00:20:03.771 10:01:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:03.771 10:01:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 73907 /var/tmp/bdevperf.sock 00:20:03.771 10:01:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@829 -- # '[' -z 73907 ']' 00:20:03.771 10:01:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:03.771 10:01:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:03.771 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:03.771 10:01:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:03.771 10:01:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:03.771 10:01:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:20:03.771 [2024-07-15 10:01:17.316905] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:20:03.771 [2024-07-15 10:01:17.316974] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73907 ] 00:20:04.035 [2024-07-15 10:01:17.447021] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:04.035 [2024-07-15 10:01:17.557808] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:04.974 10:01:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:04.974 10:01:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@862 -- # return 0 00:20:04.974 10:01:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:20:04.974 Nvme0n1 00:20:05.231 10:01:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:20:05.231 [ 00:20:05.231 { 00:20:05.231 "aliases": [ 00:20:05.231 "b9be42e6-a35b-4c76-92a5-fd7608cfc602" 00:20:05.231 ], 00:20:05.231 "assigned_rate_limits": { 00:20:05.231 "r_mbytes_per_sec": 0, 00:20:05.231 "rw_ios_per_sec": 0, 00:20:05.231 "rw_mbytes_per_sec": 0, 00:20:05.231 "w_mbytes_per_sec": 0 00:20:05.231 }, 00:20:05.231 "block_size": 4096, 00:20:05.231 "claimed": false, 00:20:05.231 "driver_specific": { 00:20:05.232 "mp_policy": "active_passive", 00:20:05.232 "nvme": [ 00:20:05.232 { 00:20:05.232 "ctrlr_data": { 00:20:05.232 "ana_reporting": false, 00:20:05.232 "cntlid": 1, 00:20:05.232 "firmware_revision": "24.09", 00:20:05.232 "model_number": "SPDK bdev Controller", 00:20:05.232 "multi_ctrlr": true, 00:20:05.232 "oacs": { 00:20:05.232 "firmware": 0, 00:20:05.232 "format": 0, 00:20:05.232 "ns_manage": 0, 00:20:05.232 "security": 0 00:20:05.232 }, 00:20:05.232 "serial_number": "SPDK0", 00:20:05.232 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:05.232 "vendor_id": "0x8086" 00:20:05.232 }, 00:20:05.232 "ns_data": { 00:20:05.232 "can_share": true, 00:20:05.232 "id": 1 00:20:05.232 }, 00:20:05.232 "trid": { 00:20:05.232 "adrfam": "IPv4", 00:20:05.232 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:05.232 "traddr": "10.0.0.2", 00:20:05.232 "trsvcid": "4420", 00:20:05.232 "trtype": "TCP" 00:20:05.232 }, 00:20:05.232 "vs": { 00:20:05.232 "nvme_version": "1.3" 00:20:05.232 } 00:20:05.232 } 00:20:05.232 ] 00:20:05.232 }, 00:20:05.232 "memory_domains": [ 00:20:05.232 { 00:20:05.232 "dma_device_id": "system", 00:20:05.232 "dma_device_type": 1 00:20:05.232 } 00:20:05.232 ], 00:20:05.232 "name": "Nvme0n1", 00:20:05.232 "num_blocks": 38912, 00:20:05.232 "product_name": "NVMe disk", 00:20:05.232 "supported_io_types": { 00:20:05.232 "abort": true, 00:20:05.232 "compare": true, 00:20:05.232 "compare_and_write": true, 00:20:05.232 "copy": true, 00:20:05.232 "flush": true, 00:20:05.232 "get_zone_info": false, 00:20:05.232 "nvme_admin": true, 00:20:05.232 "nvme_io": true, 00:20:05.232 "nvme_io_md": false, 00:20:05.232 "nvme_iov_md": false, 00:20:05.232 "read": true, 00:20:05.232 "reset": true, 00:20:05.232 "seek_data": false, 00:20:05.232 "seek_hole": false, 00:20:05.232 "unmap": true, 00:20:05.232 "write": true, 00:20:05.232 "write_zeroes": true, 00:20:05.232 "zcopy": false, 00:20:05.232 
"zone_append": false, 00:20:05.232 "zone_management": false 00:20:05.232 }, 00:20:05.232 "uuid": "b9be42e6-a35b-4c76-92a5-fd7608cfc602", 00:20:05.232 "zoned": false 00:20:05.232 } 00:20:05.232 ] 00:20:05.232 10:01:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:05.232 10:01:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=73956 00:20:05.232 10:01:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:20:05.490 Running I/O for 10 seconds... 00:20:06.427 Latency(us) 00:20:06.427 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:06.427 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:20:06.427 Nvme0n1 : 1.00 10444.00 40.80 0.00 0.00 0.00 0.00 0.00 00:20:06.427 =================================================================================================================== 00:20:06.427 Total : 10444.00 40.80 0.00 0.00 0.00 0.00 0.00 00:20:06.427 00:20:07.362 10:01:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u bccd20f8-f9ea-46a2-b5c3-ee29ab2c6c76 00:20:07.362 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:20:07.362 Nvme0n1 : 2.00 10313.00 40.29 0.00 0.00 0.00 0.00 0.00 00:20:07.362 =================================================================================================================== 00:20:07.362 Total : 10313.00 40.29 0.00 0.00 0.00 0.00 0.00 00:20:07.362 00:20:07.620 true 00:20:07.620 10:01:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bccd20f8-f9ea-46a2-b5c3-ee29ab2c6c76 00:20:07.620 10:01:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:20:07.879 10:01:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:20:07.879 10:01:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:20:07.879 10:01:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 73956 00:20:08.446 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:20:08.446 Nvme0n1 : 3.00 10207.00 39.87 0.00 0.00 0.00 0.00 0.00 00:20:08.446 =================================================================================================================== 00:20:08.446 Total : 10207.00 39.87 0.00 0.00 0.00 0.00 0.00 00:20:08.446 00:20:09.386 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:20:09.386 Nvme0n1 : 4.00 10112.25 39.50 0.00 0.00 0.00 0.00 0.00 00:20:09.386 =================================================================================================================== 00:20:09.386 Total : 10112.25 39.50 0.00 0.00 0.00 0.00 0.00 00:20:09.386 00:20:10.323 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:20:10.323 Nvme0n1 : 5.00 10049.40 39.26 0.00 0.00 0.00 0.00 0.00 00:20:10.323 =================================================================================================================== 00:20:10.323 Total : 10049.40 39.26 0.00 0.00 0.00 0.00 0.00 00:20:10.323 00:20:11.700 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:20:11.700 Nvme0n1 : 6.00 10006.17 39.09 0.00 0.00 0.00 0.00 0.00 00:20:11.700 =================================================================================================================== 00:20:11.700 Total : 10006.17 39.09 0.00 0.00 0.00 0.00 0.00 00:20:11.700 00:20:12.330 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:20:12.330 Nvme0n1 : 7.00 9972.57 38.96 0.00 0.00 0.00 0.00 0.00 00:20:12.330 =================================================================================================================== 00:20:12.330 Total : 9972.57 38.96 0.00 0.00 0.00 0.00 0.00 00:20:12.330 00:20:13.708 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:20:13.708 Nvme0n1 : 8.00 9932.12 38.80 0.00 0.00 0.00 0.00 0.00 00:20:13.708 =================================================================================================================== 00:20:13.708 Total : 9932.12 38.80 0.00 0.00 0.00 0.00 0.00 00:20:13.708 00:20:14.300 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:20:14.300 Nvme0n1 : 9.00 9908.22 38.70 0.00 0.00 0.00 0.00 0.00 00:20:14.300 =================================================================================================================== 00:20:14.300 Total : 9908.22 38.70 0.00 0.00 0.00 0.00 0.00 00:20:14.300 00:20:15.701 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:20:15.701 Nvme0n1 : 10.00 9910.90 38.71 0.00 0.00 0.00 0.00 0.00 00:20:15.701 =================================================================================================================== 00:20:15.701 Total : 9910.90 38.71 0.00 0.00 0.00 0.00 0.00 00:20:15.701 00:20:15.701 00:20:15.701 Latency(us) 00:20:15.701 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:15.702 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:20:15.702 Nvme0n1 : 10.01 9916.51 38.74 0.00 0.00 12902.90 4063.80 26672.29 00:20:15.702 =================================================================================================================== 00:20:15.702 Total : 9916.51 38.74 0.00 0.00 12902.90 4063.80 26672.29 00:20:15.702 0 00:20:15.702 10:01:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 73907 00:20:15.702 10:01:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@948 -- # '[' -z 73907 ']' 00:20:15.702 10:01:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # kill -0 73907 00:20:15.702 10:01:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # uname 00:20:15.702 10:01:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:15.702 10:01:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73907 00:20:15.702 10:01:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:20:15.702 10:01:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:20:15.702 killing process with pid 73907 00:20:15.702 10:01:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73907' 00:20:15.702 10:01:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@967 -- # kill 73907 00:20:15.702 Received shutdown signal, test time was about 10.000000 seconds 00:20:15.702 00:20:15.702 Latency(us) 
00:20:15.702 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:15.702 =================================================================================================================== 00:20:15.702 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:15.702 10:01:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # wait 73907 00:20:15.702 10:01:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:20:15.960 10:01:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:20:16.217 10:01:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bccd20f8-f9ea-46a2-b5c3-ee29ab2c6c76 00:20:16.217 10:01:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:20:16.217 10:01:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:20:16.217 10:01:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:20:16.217 10:01:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:20:16.473 [2024-07-15 10:01:29.972392] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:20:16.473 10:01:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bccd20f8-f9ea-46a2-b5c3-ee29ab2c6c76 00:20:16.473 10:01:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@648 -- # local es=0 00:20:16.473 10:01:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bccd20f8-f9ea-46a2-b5c3-ee29ab2c6c76 00:20:16.473 10:01:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:16.473 10:01:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:16.473 10:01:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:16.473 10:01:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:16.473 10:01:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:16.473 10:01:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:16.473 10:01:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:16.473 10:01:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:20:16.473 10:01:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bccd20f8-f9ea-46a2-b5c3-ee29ab2c6c76 00:20:16.729 2024/07/15 10:01:30 error on JSON-RPC 
call, method: bdev_lvol_get_lvstores, params: map[uuid:bccd20f8-f9ea-46a2-b5c3-ee29ab2c6c76], err: error received for bdev_lvol_get_lvstores method, err: Code=-19 Msg=No such device 00:20:16.729 request: 00:20:16.729 { 00:20:16.729 "method": "bdev_lvol_get_lvstores", 00:20:16.729 "params": { 00:20:16.729 "uuid": "bccd20f8-f9ea-46a2-b5c3-ee29ab2c6c76" 00:20:16.729 } 00:20:16.729 } 00:20:16.729 Got JSON-RPC error response 00:20:16.729 GoRPCClient: error on JSON-RPC call 00:20:16.729 10:01:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # es=1 00:20:16.729 10:01:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:16.729 10:01:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:16.729 10:01:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:16.729 10:01:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:20:16.985 aio_bdev 00:20:16.985 10:01:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev b9be42e6-a35b-4c76-92a5-fd7608cfc602 00:20:16.985 10:01:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@897 -- # local bdev_name=b9be42e6-a35b-4c76-92a5-fd7608cfc602 00:20:16.985 10:01:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:20:16.985 10:01:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local i 00:20:16.985 10:01:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:20:16.985 10:01:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:20:16.985 10:01:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:20:17.242 10:01:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b b9be42e6-a35b-4c76-92a5-fd7608cfc602 -t 2000 00:20:17.501 [ 00:20:17.501 { 00:20:17.501 "aliases": [ 00:20:17.501 "lvs/lvol" 00:20:17.501 ], 00:20:17.501 "assigned_rate_limits": { 00:20:17.501 "r_mbytes_per_sec": 0, 00:20:17.501 "rw_ios_per_sec": 0, 00:20:17.501 "rw_mbytes_per_sec": 0, 00:20:17.501 "w_mbytes_per_sec": 0 00:20:17.501 }, 00:20:17.501 "block_size": 4096, 00:20:17.501 "claimed": false, 00:20:17.501 "driver_specific": { 00:20:17.501 "lvol": { 00:20:17.501 "base_bdev": "aio_bdev", 00:20:17.501 "clone": false, 00:20:17.501 "esnap_clone": false, 00:20:17.501 "lvol_store_uuid": "bccd20f8-f9ea-46a2-b5c3-ee29ab2c6c76", 00:20:17.501 "num_allocated_clusters": 38, 00:20:17.501 "snapshot": false, 00:20:17.501 "thin_provision": false 00:20:17.501 } 00:20:17.501 }, 00:20:17.501 "name": "b9be42e6-a35b-4c76-92a5-fd7608cfc602", 00:20:17.501 "num_blocks": 38912, 00:20:17.501 "product_name": "Logical Volume", 00:20:17.501 "supported_io_types": { 00:20:17.501 "abort": false, 00:20:17.501 "compare": false, 00:20:17.501 "compare_and_write": false, 00:20:17.501 "copy": false, 00:20:17.501 "flush": false, 00:20:17.501 "get_zone_info": false, 00:20:17.501 "nvme_admin": false, 00:20:17.501 "nvme_io": false, 00:20:17.501 "nvme_io_md": false, 00:20:17.501 "nvme_iov_md": false, 00:20:17.501 "read": true, 00:20:17.501 
"reset": true, 00:20:17.501 "seek_data": true, 00:20:17.501 "seek_hole": true, 00:20:17.501 "unmap": true, 00:20:17.501 "write": true, 00:20:17.501 "write_zeroes": true, 00:20:17.501 "zcopy": false, 00:20:17.501 "zone_append": false, 00:20:17.501 "zone_management": false 00:20:17.501 }, 00:20:17.501 "uuid": "b9be42e6-a35b-4c76-92a5-fd7608cfc602", 00:20:17.501 "zoned": false 00:20:17.501 } 00:20:17.501 ] 00:20:17.501 10:01:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # return 0 00:20:17.501 10:01:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bccd20f8-f9ea-46a2-b5c3-ee29ab2c6c76 00:20:17.501 10:01:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:20:17.761 10:01:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:20:17.761 10:01:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bccd20f8-f9ea-46a2-b5c3-ee29ab2c6c76 00:20:17.761 10:01:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:20:18.019 10:01:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:20:18.019 10:01:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete b9be42e6-a35b-4c76-92a5-fd7608cfc602 00:20:18.277 10:01:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u bccd20f8-f9ea-46a2-b5c3-ee29ab2c6c76 00:20:18.536 10:01:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:20:18.536 10:01:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:20:19.182 00:20:19.182 real 0m17.593s 00:20:19.182 user 0m16.995s 00:20:19.182 sys 0m2.001s 00:20:19.182 10:01:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:19.182 10:01:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:20:19.182 ************************************ 00:20:19.182 END TEST lvs_grow_clean 00:20:19.182 ************************************ 00:20:19.182 10:01:32 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1142 -- # return 0 00:20:19.182 10:01:32 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:20:19.182 10:01:32 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:20:19.182 10:01:32 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:19.182 10:01:32 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:20:19.182 ************************************ 00:20:19.182 START TEST lvs_grow_dirty 00:20:19.182 ************************************ 00:20:19.182 10:01:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1123 -- # lvs_grow dirty 00:20:19.182 10:01:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:20:19.182 10:01:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local 
data_clusters free_clusters 00:20:19.182 10:01:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:20:19.182 10:01:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:20:19.182 10:01:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:20:19.182 10:01:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:20:19.182 10:01:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:20:19.182 10:01:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:20:19.182 10:01:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:20:19.442 10:01:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:20:19.442 10:01:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:20:19.701 10:01:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=8e98d36a-0bdb-4fa0-b8fb-c0e880a8fba8 00:20:19.701 10:01:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8e98d36a-0bdb-4fa0-b8fb-c0e880a8fba8 00:20:19.701 10:01:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:20:19.960 10:01:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:20:19.960 10:01:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:20:19.960 10:01:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 8e98d36a-0bdb-4fa0-b8fb-c0e880a8fba8 lvol 150 00:20:19.960 10:01:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=f237eaf2-770f-4d27-adb5-b4d988032336 00:20:19.960 10:01:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:20:19.960 10:01:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:20:20.219 [2024-07-15 10:01:33.675743] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:20:20.219 [2024-07-15 10:01:33.675815] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:20:20.219 true 00:20:20.219 10:01:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8e98d36a-0bdb-4fa0-b8fb-c0e880a8fba8 00:20:20.219 10:01:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:20:20.480 10:01:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:20:20.480 10:01:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:20:20.740 10:01:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 f237eaf2-770f-4d27-adb5-b4d988032336 00:20:20.999 10:01:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:20:20.999 [2024-07-15 10:01:34.522477] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:20.999 10:01:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:20:21.258 10:01:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=74346 00:20:21.258 10:01:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:20:21.258 10:01:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:21.258 10:01:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 74346 /var/tmp/bdevperf.sock 00:20:21.258 10:01:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 74346 ']' 00:20:21.258 10:01:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:21.258 10:01:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:21.258 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:21.258 10:01:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:21.258 10:01:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:21.258 10:01:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:20:21.517 [2024-07-15 10:01:34.856229] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:20:21.518 [2024-07-15 10:01:34.856788] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74346 ] 00:20:21.518 [2024-07-15 10:01:34.995103] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:21.518 [2024-07-15 10:01:35.093587] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:22.456 10:01:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:22.456 10:01:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:20:22.456 10:01:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:20:22.456 Nvme0n1 00:20:22.716 10:01:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:20:22.716 [ 00:20:22.716 { 00:20:22.716 "aliases": [ 00:20:22.716 "f237eaf2-770f-4d27-adb5-b4d988032336" 00:20:22.716 ], 00:20:22.716 "assigned_rate_limits": { 00:20:22.716 "r_mbytes_per_sec": 0, 00:20:22.716 "rw_ios_per_sec": 0, 00:20:22.716 "rw_mbytes_per_sec": 0, 00:20:22.716 "w_mbytes_per_sec": 0 00:20:22.716 }, 00:20:22.716 "block_size": 4096, 00:20:22.716 "claimed": false, 00:20:22.716 "driver_specific": { 00:20:22.716 "mp_policy": "active_passive", 00:20:22.716 "nvme": [ 00:20:22.716 { 00:20:22.716 "ctrlr_data": { 00:20:22.716 "ana_reporting": false, 00:20:22.716 "cntlid": 1, 00:20:22.716 "firmware_revision": "24.09", 00:20:22.716 "model_number": "SPDK bdev Controller", 00:20:22.716 "multi_ctrlr": true, 00:20:22.716 "oacs": { 00:20:22.716 "firmware": 0, 00:20:22.716 "format": 0, 00:20:22.716 "ns_manage": 0, 00:20:22.716 "security": 0 00:20:22.716 }, 00:20:22.716 "serial_number": "SPDK0", 00:20:22.716 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:22.716 "vendor_id": "0x8086" 00:20:22.716 }, 00:20:22.716 "ns_data": { 00:20:22.716 "can_share": true, 00:20:22.716 "id": 1 00:20:22.716 }, 00:20:22.716 "trid": { 00:20:22.716 "adrfam": "IPv4", 00:20:22.716 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:22.716 "traddr": "10.0.0.2", 00:20:22.716 "trsvcid": "4420", 00:20:22.716 "trtype": "TCP" 00:20:22.716 }, 00:20:22.716 "vs": { 00:20:22.716 "nvme_version": "1.3" 00:20:22.716 } 00:20:22.716 } 00:20:22.716 ] 00:20:22.716 }, 00:20:22.716 "memory_domains": [ 00:20:22.716 { 00:20:22.716 "dma_device_id": "system", 00:20:22.716 "dma_device_type": 1 00:20:22.716 } 00:20:22.716 ], 00:20:22.716 "name": "Nvme0n1", 00:20:22.716 "num_blocks": 38912, 00:20:22.716 "product_name": "NVMe disk", 00:20:22.716 "supported_io_types": { 00:20:22.716 "abort": true, 00:20:22.716 "compare": true, 00:20:22.716 "compare_and_write": true, 00:20:22.716 "copy": true, 00:20:22.716 "flush": true, 00:20:22.716 "get_zone_info": false, 00:20:22.716 "nvme_admin": true, 00:20:22.716 "nvme_io": true, 00:20:22.716 "nvme_io_md": false, 00:20:22.716 "nvme_iov_md": false, 00:20:22.716 "read": true, 00:20:22.716 "reset": true, 00:20:22.716 "seek_data": false, 00:20:22.716 "seek_hole": false, 00:20:22.716 "unmap": true, 00:20:22.716 "write": true, 00:20:22.716 "write_zeroes": true, 00:20:22.716 "zcopy": false, 00:20:22.716 
"zone_append": false, 00:20:22.716 "zone_management": false 00:20:22.716 }, 00:20:22.716 "uuid": "f237eaf2-770f-4d27-adb5-b4d988032336", 00:20:22.716 "zoned": false 00:20:22.716 } 00:20:22.716 ] 00:20:22.716 10:01:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:22.716 10:01:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=74388 00:20:22.716 10:01:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:20:22.978 Running I/O for 10 seconds... 00:20:23.922 Latency(us) 00:20:23.922 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:23.922 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:20:23.922 Nvme0n1 : 1.00 11089.00 43.32 0.00 0.00 0.00 0.00 0.00 00:20:23.922 =================================================================================================================== 00:20:23.922 Total : 11089.00 43.32 0.00 0.00 0.00 0.00 0.00 00:20:23.922 00:20:24.859 10:01:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 8e98d36a-0bdb-4fa0-b8fb-c0e880a8fba8 00:20:24.859 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:20:24.859 Nvme0n1 : 2.00 10783.50 42.12 0.00 0.00 0.00 0.00 0.00 00:20:24.859 =================================================================================================================== 00:20:24.859 Total : 10783.50 42.12 0.00 0.00 0.00 0.00 0.00 00:20:24.859 00:20:25.118 true 00:20:25.118 10:01:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8e98d36a-0bdb-4fa0-b8fb-c0e880a8fba8 00:20:25.118 10:01:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:20:25.377 10:01:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:20:25.377 10:01:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:20:25.377 10:01:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 74388 00:20:25.942 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:20:25.942 Nvme0n1 : 3.00 10748.33 41.99 0.00 0.00 0.00 0.00 0.00 00:20:25.942 =================================================================================================================== 00:20:25.942 Total : 10748.33 41.99 0.00 0.00 0.00 0.00 0.00 00:20:25.942 00:20:26.876 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:20:26.876 Nvme0n1 : 4.00 10644.50 41.58 0.00 0.00 0.00 0.00 0.00 00:20:26.876 =================================================================================================================== 00:20:26.876 Total : 10644.50 41.58 0.00 0.00 0.00 0.00 0.00 00:20:26.877 00:20:27.816 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:20:27.816 Nvme0n1 : 5.00 10583.00 41.34 0.00 0.00 0.00 0.00 0.00 00:20:27.817 =================================================================================================================== 00:20:27.817 Total : 10583.00 41.34 0.00 0.00 0.00 0.00 0.00 00:20:27.817 00:20:28.752 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:20:28.752 Nvme0n1 : 6.00 10527.17 41.12 0.00 0.00 0.00 0.00 0.00 00:20:28.752 =================================================================================================================== 00:20:28.752 Total : 10527.17 41.12 0.00 0.00 0.00 0.00 0.00 00:20:28.752 00:20:30.128 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:20:30.128 Nvme0n1 : 7.00 10484.71 40.96 0.00 0.00 0.00 0.00 0.00 00:20:30.128 =================================================================================================================== 00:20:30.128 Total : 10484.71 40.96 0.00 0.00 0.00 0.00 0.00 00:20:30.128 00:20:31.062 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:20:31.062 Nvme0n1 : 8.00 10266.62 40.10 0.00 0.00 0.00 0.00 0.00 00:20:31.062 =================================================================================================================== 00:20:31.062 Total : 10266.62 40.10 0.00 0.00 0.00 0.00 0.00 00:20:31.062 00:20:31.998 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:20:31.998 Nvme0n1 : 9.00 10235.11 39.98 0.00 0.00 0.00 0.00 0.00 00:20:31.998 =================================================================================================================== 00:20:31.998 Total : 10235.11 39.98 0.00 0.00 0.00 0.00 0.00 00:20:31.998 00:20:32.932 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:20:32.932 Nvme0n1 : 10.00 10200.00 39.84 0.00 0.00 0.00 0.00 0.00 00:20:32.932 =================================================================================================================== 00:20:32.932 Total : 10200.00 39.84 0.00 0.00 0.00 0.00 0.00 00:20:32.933 00:20:32.933 00:20:32.933 Latency(us) 00:20:32.933 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:32.933 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:20:32.933 Nvme0n1 : 10.00 10207.74 39.87 0.00 0.00 12534.97 5523.34 92952.37 00:20:32.933 =================================================================================================================== 00:20:32.933 Total : 10207.74 39.87 0.00 0.00 12534.97 5523.34 92952.37 00:20:32.933 0 00:20:32.933 10:01:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 74346 00:20:32.933 10:01:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@948 -- # '[' -z 74346 ']' 00:20:32.933 10:01:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # kill -0 74346 00:20:32.933 10:01:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # uname 00:20:32.933 10:01:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:32.933 10:01:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 74346 00:20:32.933 killing process with pid 74346 00:20:32.933 Received shutdown signal, test time was about 10.000000 seconds 00:20:32.933 00:20:32.933 Latency(us) 00:20:32.933 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:32.933 =================================================================================================================== 00:20:32.933 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:32.933 10:01:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:20:32.933 10:01:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:20:32.933 10:01:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@966 -- # echo 'killing process with pid 74346' 00:20:32.933 10:01:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@967 -- # kill 74346 00:20:32.933 10:01:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # wait 74346 00:20:33.190 10:01:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:20:33.190 10:01:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:20:33.448 10:01:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8e98d36a-0bdb-4fa0-b8fb-c0e880a8fba8 00:20:33.448 10:01:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:20:33.706 10:01:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:20:33.706 10:01:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:20:33.706 10:01:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 73750 00:20:33.706 10:01:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 73750 00:20:33.706 /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 73750 Killed "${NVMF_APP[@]}" "$@" 00:20:33.706 10:01:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:20:33.706 10:01:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:20:33.706 10:01:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:33.706 10:01:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:33.706 10:01:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:20:33.706 10:01:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=74552 00:20:33.706 10:01:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@482 -- # waitforlisten 74552 00:20:33.706 10:01:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 74552 ']' 00:20:33.706 10:01:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:20:33.706 10:01:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:33.706 10:01:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:33.706 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:33.706 10:01:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
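That kill -9 is the point of the dirty variant: the target is taken down without giving the lvstore a chance to close cleanly, a fresh nvmf_tgt (pid 74552 above) is started, and the AIO bdev is re-created, which is what produces the "Performing recovery on blobstore" notices further down the trace. Stripped of the xtrace noise, the check amounts to roughly the following; the $nvmfpid and $lvs variables and the shortened rpc.py path are placeholders for the literal values in the log:

# rough outline of the dirty-recovery check (illustrative variable names)
kill -9 "$nvmfpid"                                                      # unclean shutdown, lvstore left dirty
ip netns exec nvmf_tgt_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &         # restart the target
rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096   # blobstore recovery runs here
# wait for the lvol bdev to reappear, then verify the grown geometry survived
free=$(rpc.py bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].free_clusters')
total=$(rpc.py bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters')
(( free == 61 )) && (( total == 99 ))                                   # same counts as before the kill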
00:20:33.706 10:01:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:33.706 10:01:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:20:33.706 [2024-07-15 10:01:47.288590] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:20:33.706 [2024-07-15 10:01:47.288675] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:33.964 [2024-07-15 10:01:47.425748] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:33.964 [2024-07-15 10:01:47.529204] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:33.964 [2024-07-15 10:01:47.529252] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:33.964 [2024-07-15 10:01:47.529258] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:33.964 [2024-07-15 10:01:47.529264] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:33.964 [2024-07-15 10:01:47.529268] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:33.964 [2024-07-15 10:01:47.529288] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:34.897 10:01:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:34.897 10:01:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:20:34.897 10:01:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:34.897 10:01:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:34.897 10:01:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:20:34.897 10:01:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:34.897 10:01:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:20:34.897 [2024-07-15 10:01:48.375699] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:20:34.897 [2024-07-15 10:01:48.376010] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:20:34.897 [2024-07-15 10:01:48.376136] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:20:34.897 10:01:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:20:34.897 10:01:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev f237eaf2-770f-4d27-adb5-b4d988032336 00:20:34.897 10:01:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=f237eaf2-770f-4d27-adb5-b4d988032336 00:20:34.897 10:01:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:20:34.897 10:01:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 00:20:34.897 10:01:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:20:34.897 10:01:48 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:20:34.897 10:01:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:20:35.155 10:01:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b f237eaf2-770f-4d27-adb5-b4d988032336 -t 2000 00:20:35.414 [ 00:20:35.414 { 00:20:35.414 "aliases": [ 00:20:35.414 "lvs/lvol" 00:20:35.414 ], 00:20:35.414 "assigned_rate_limits": { 00:20:35.414 "r_mbytes_per_sec": 0, 00:20:35.414 "rw_ios_per_sec": 0, 00:20:35.414 "rw_mbytes_per_sec": 0, 00:20:35.414 "w_mbytes_per_sec": 0 00:20:35.414 }, 00:20:35.414 "block_size": 4096, 00:20:35.414 "claimed": false, 00:20:35.414 "driver_specific": { 00:20:35.414 "lvol": { 00:20:35.414 "base_bdev": "aio_bdev", 00:20:35.414 "clone": false, 00:20:35.414 "esnap_clone": false, 00:20:35.414 "lvol_store_uuid": "8e98d36a-0bdb-4fa0-b8fb-c0e880a8fba8", 00:20:35.414 "num_allocated_clusters": 38, 00:20:35.414 "snapshot": false, 00:20:35.414 "thin_provision": false 00:20:35.414 } 00:20:35.414 }, 00:20:35.414 "name": "f237eaf2-770f-4d27-adb5-b4d988032336", 00:20:35.414 "num_blocks": 38912, 00:20:35.414 "product_name": "Logical Volume", 00:20:35.414 "supported_io_types": { 00:20:35.414 "abort": false, 00:20:35.414 "compare": false, 00:20:35.414 "compare_and_write": false, 00:20:35.414 "copy": false, 00:20:35.414 "flush": false, 00:20:35.414 "get_zone_info": false, 00:20:35.414 "nvme_admin": false, 00:20:35.414 "nvme_io": false, 00:20:35.414 "nvme_io_md": false, 00:20:35.414 "nvme_iov_md": false, 00:20:35.414 "read": true, 00:20:35.414 "reset": true, 00:20:35.414 "seek_data": true, 00:20:35.414 "seek_hole": true, 00:20:35.414 "unmap": true, 00:20:35.414 "write": true, 00:20:35.414 "write_zeroes": true, 00:20:35.414 "zcopy": false, 00:20:35.414 "zone_append": false, 00:20:35.414 "zone_management": false 00:20:35.414 }, 00:20:35.414 "uuid": "f237eaf2-770f-4d27-adb5-b4d988032336", 00:20:35.414 "zoned": false 00:20:35.414 } 00:20:35.414 ] 00:20:35.414 10:01:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:20:35.414 10:01:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8e98d36a-0bdb-4fa0-b8fb-c0e880a8fba8 00:20:35.414 10:01:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:20:35.672 10:01:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:20:35.672 10:01:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:20:35.672 10:01:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8e98d36a-0bdb-4fa0-b8fb-c0e880a8fba8 00:20:35.672 10:01:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:20:35.672 10:01:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:20:35.930 [2024-07-15 10:01:49.419336] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:20:35.930 10:01:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # 
NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8e98d36a-0bdb-4fa0-b8fb-c0e880a8fba8 00:20:35.930 10:01:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@648 -- # local es=0 00:20:35.930 10:01:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8e98d36a-0bdb-4fa0-b8fb-c0e880a8fba8 00:20:35.930 10:01:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:35.930 10:01:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:35.930 10:01:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:35.930 10:01:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:35.930 10:01:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:35.930 10:01:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:35.930 10:01:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:35.930 10:01:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:20:35.930 10:01:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8e98d36a-0bdb-4fa0-b8fb-c0e880a8fba8 00:20:36.189 2024/07/15 10:01:49 error on JSON-RPC call, method: bdev_lvol_get_lvstores, params: map[uuid:8e98d36a-0bdb-4fa0-b8fb-c0e880a8fba8], err: error received for bdev_lvol_get_lvstores method, err: Code=-19 Msg=No such device 00:20:36.189 request: 00:20:36.189 { 00:20:36.189 "method": "bdev_lvol_get_lvstores", 00:20:36.189 "params": { 00:20:36.189 "uuid": "8e98d36a-0bdb-4fa0-b8fb-c0e880a8fba8" 00:20:36.189 } 00:20:36.189 } 00:20:36.189 Got JSON-RPC error response 00:20:36.189 GoRPCClient: error on JSON-RPC call 00:20:36.189 10:01:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # es=1 00:20:36.189 10:01:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:36.189 10:01:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:36.189 10:01:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:36.189 10:01:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:20:36.448 aio_bdev 00:20:36.448 10:01:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev f237eaf2-770f-4d27-adb5-b4d988032336 00:20:36.448 10:01:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=f237eaf2-770f-4d27-adb5-b4d988032336 00:20:36.448 10:01:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:20:36.448 10:01:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 00:20:36.448 10:01:49 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:20:36.448 10:01:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:20:36.448 10:01:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:20:36.707 10:01:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b f237eaf2-770f-4d27-adb5-b4d988032336 -t 2000 00:20:36.707 [ 00:20:36.707 { 00:20:36.707 "aliases": [ 00:20:36.707 "lvs/lvol" 00:20:36.707 ], 00:20:36.707 "assigned_rate_limits": { 00:20:36.707 "r_mbytes_per_sec": 0, 00:20:36.707 "rw_ios_per_sec": 0, 00:20:36.707 "rw_mbytes_per_sec": 0, 00:20:36.707 "w_mbytes_per_sec": 0 00:20:36.707 }, 00:20:36.707 "block_size": 4096, 00:20:36.707 "claimed": false, 00:20:36.707 "driver_specific": { 00:20:36.707 "lvol": { 00:20:36.707 "base_bdev": "aio_bdev", 00:20:36.707 "clone": false, 00:20:36.707 "esnap_clone": false, 00:20:36.707 "lvol_store_uuid": "8e98d36a-0bdb-4fa0-b8fb-c0e880a8fba8", 00:20:36.707 "num_allocated_clusters": 38, 00:20:36.707 "snapshot": false, 00:20:36.707 "thin_provision": false 00:20:36.707 } 00:20:36.707 }, 00:20:36.707 "name": "f237eaf2-770f-4d27-adb5-b4d988032336", 00:20:36.707 "num_blocks": 38912, 00:20:36.707 "product_name": "Logical Volume", 00:20:36.707 "supported_io_types": { 00:20:36.707 "abort": false, 00:20:36.707 "compare": false, 00:20:36.707 "compare_and_write": false, 00:20:36.707 "copy": false, 00:20:36.707 "flush": false, 00:20:36.707 "get_zone_info": false, 00:20:36.707 "nvme_admin": false, 00:20:36.707 "nvme_io": false, 00:20:36.707 "nvme_io_md": false, 00:20:36.707 "nvme_iov_md": false, 00:20:36.707 "read": true, 00:20:36.707 "reset": true, 00:20:36.707 "seek_data": true, 00:20:36.707 "seek_hole": true, 00:20:36.707 "unmap": true, 00:20:36.707 "write": true, 00:20:36.707 "write_zeroes": true, 00:20:36.707 "zcopy": false, 00:20:36.707 "zone_append": false, 00:20:36.707 "zone_management": false 00:20:36.707 }, 00:20:36.707 "uuid": "f237eaf2-770f-4d27-adb5-b4d988032336", 00:20:36.707 "zoned": false 00:20:36.707 } 00:20:36.707 ] 00:20:36.967 10:01:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:20:36.967 10:01:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8e98d36a-0bdb-4fa0-b8fb-c0e880a8fba8 00:20:36.967 10:01:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:20:36.967 10:01:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:20:36.967 10:01:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8e98d36a-0bdb-4fa0-b8fb-c0e880a8fba8 00:20:36.967 10:01:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:20:37.237 10:01:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:20:37.237 10:01:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete f237eaf2-770f-4d27-adb5-b4d988032336 00:20:37.500 10:01:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 8e98d36a-0bdb-4fa0-b8fb-c0e880a8fba8 00:20:37.758 10:01:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:20:38.017 10:01:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:20:38.277 00:20:38.277 real 0m19.322s 00:20:38.277 user 0m40.624s 00:20:38.277 sys 0m6.954s 00:20:38.277 10:01:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:38.277 10:01:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:20:38.277 ************************************ 00:20:38.277 END TEST lvs_grow_dirty 00:20:38.277 ************************************ 00:20:38.536 10:01:51 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1142 -- # return 0 00:20:38.536 10:01:51 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:20:38.536 10:01:51 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@806 -- # type=--id 00:20:38.536 10:01:51 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@807 -- # id=0 00:20:38.536 10:01:51 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:20:38.536 10:01:51 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:20:38.536 10:01:51 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:20:38.536 10:01:51 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:20:38.536 10:01:51 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # for n in $shm_files 00:20:38.536 10:01:51 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:20:38.536 nvmf_trace.0 00:20:38.536 10:01:51 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # return 0 00:20:38.536 10:01:51 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:20:38.536 10:01:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:38.536 10:01:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:20:38.796 10:01:52 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:38.796 10:01:52 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:20:38.796 10:01:52 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:38.796 10:01:52 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:38.796 rmmod nvme_tcp 00:20:38.796 rmmod nvme_fabrics 00:20:38.796 rmmod nvme_keyring 00:20:38.796 10:01:52 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:38.796 10:01:52 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 00:20:38.796 10:01:52 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:20:38.796 10:01:52 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 74552 ']' 00:20:38.796 10:01:52 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 74552 00:20:38.796 10:01:52 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@948 -- # '[' -z 74552 ']' 00:20:38.796 10:01:52 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # kill -0 74552 00:20:38.796 10:01:52 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # uname 00:20:38.796 10:01:52 
nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:38.796 10:01:52 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 74552 00:20:38.796 10:01:52 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:38.796 10:01:52 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:38.796 10:01:52 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@966 -- # echo 'killing process with pid 74552' 00:20:38.796 killing process with pid 74552 00:20:38.796 10:01:52 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@967 -- # kill 74552 00:20:38.796 10:01:52 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # wait 74552 00:20:39.055 10:01:52 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:39.055 10:01:52 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:39.055 10:01:52 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:39.055 10:01:52 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:39.055 10:01:52 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:39.055 10:01:52 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:39.055 10:01:52 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:39.055 10:01:52 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:39.055 10:01:52 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:20:39.055 00:20:39.055 real 0m39.393s 00:20:39.055 user 1m3.386s 00:20:39.055 sys 0m9.777s 00:20:39.055 10:01:52 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:39.055 10:01:52 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:20:39.055 ************************************ 00:20:39.055 END TEST nvmf_lvs_grow 00:20:39.055 ************************************ 00:20:39.055 10:01:52 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:20:39.055 10:01:52 nvmf_tcp -- nvmf/nvmf.sh@50 -- # run_test nvmf_bdev_io_wait /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:20:39.055 10:01:52 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:20:39.055 10:01:52 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:39.055 10:01:52 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:39.055 ************************************ 00:20:39.055 START TEST nvmf_bdev_io_wait 00:20:39.055 ************************************ 00:20:39.055 10:01:52 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:20:39.315 * Looking for test storage... 
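Before nvmf_bdev_io_wait begins, the nvmftestfini path above has already unloaded the kernel initiator modules, killed the target (pid 74552), and flushed the initiator-side address. Condensed into a standalone sketch that mirrors the traced commands (the body of remove_spdk_ns is not shown in the trace, so the namespace deletion here is an assumption):

    # Best-effort teardown between TCP test runs.
    modprobe -v -r nvme-tcp                        # rmmod output above shows nvme_fabrics
    modprobe -v -r nvme-fabrics                    # and nvme_keyring going with it
    kill "$nvmfpid" && wait "$nvmfpid"             # nvmfpid was 74552 in this run
    ip netns delete nvmf_tgt_ns_spdk 2>/dev/null   # assumed equivalent of remove_spdk_ns
    ip -4 addr flush nvmf_init_if                  # clear the initiator veth address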
00:20:39.315 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:20:39.315 10:01:52 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:39.315 10:01:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:20:39.315 10:01:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:39.315 10:01:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:39.315 10:01:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:39.315 10:01:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:39.315 10:01:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:39.315 10:01:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:39.315 10:01:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:39.315 10:01:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:39.315 10:01:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:39.315 10:01:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:39.316 10:01:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec 00:20:39.316 10:01:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=a2b6b25a-cc90-4aea-9f09-c06f8a634aec 00:20:39.316 10:01:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:39.316 10:01:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:39.316 10:01:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:39.316 10:01:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:39.316 10:01:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:39.316 10:01:52 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:39.316 10:01:52 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:39.316 10:01:52 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:39.316 10:01:52 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:39.316 10:01:52 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:39.316 10:01:52 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:39.316 10:01:52 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:20:39.316 10:01:52 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:39.316 10:01:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:20:39.316 10:01:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:39.316 10:01:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:39.316 10:01:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:39.316 10:01:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:39.316 10:01:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:39.316 10:01:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:39.316 10:01:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:39.316 10:01:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:39.316 10:01:52 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:39.316 10:01:52 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:39.316 10:01:52 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:20:39.316 10:01:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:39.316 10:01:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:39.316 10:01:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:39.316 10:01:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:39.316 10:01:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:39.316 10:01:52 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:39.316 10:01:52 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:39.316 10:01:52 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:39.316 10:01:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:20:39.316 10:01:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:20:39.316 10:01:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:20:39.316 10:01:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:20:39.316 10:01:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:20:39.316 10:01:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # nvmf_veth_init 00:20:39.316 10:01:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:39.316 10:01:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:39.316 10:01:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:20:39.316 10:01:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:20:39.316 10:01:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:39.316 10:01:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:39.316 10:01:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:39.316 10:01:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:39.316 10:01:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:39.316 10:01:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:39.316 10:01:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:39.316 10:01:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:39.316 10:01:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:20:39.316 10:01:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:20:39.316 Cannot find device "nvmf_tgt_br" 00:20:39.316 10:01:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@155 -- # true 00:20:39.316 10:01:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:20:39.316 Cannot find device "nvmf_tgt_br2" 00:20:39.316 10:01:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@156 -- # true 00:20:39.316 10:01:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:20:39.316 10:01:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:20:39.316 Cannot find device "nvmf_tgt_br" 00:20:39.316 10:01:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@158 -- # true 00:20:39.316 10:01:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:20:39.316 Cannot find device "nvmf_tgt_br2" 00:20:39.316 10:01:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@159 -- # true 00:20:39.316 10:01:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:20:39.576 10:01:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 
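The "Cannot find device" messages above are expected rather than errors: nvmf_veth_init first tears down any interfaces left over from a previous run, and on a clean host there is simply nothing to remove. A hypothetical guard that would silence the noise (not part of the suite) could look like:

    # Delete leftover test interfaces only if they actually exist.
    for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2 nvmf_br; do
        ip link show "$dev" &>/dev/null && ip link delete "$dev"
    done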
00:20:39.576 10:01:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:39.576 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:39.576 10:01:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # true 00:20:39.576 10:01:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:39.576 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:39.576 10:01:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # true 00:20:39.576 10:01:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:20:39.576 10:01:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:39.576 10:01:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:39.576 10:01:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:39.576 10:01:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:39.576 10:01:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:39.576 10:01:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:39.576 10:01:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:20:39.576 10:01:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:20:39.576 10:01:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:20:39.576 10:01:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:20:39.576 10:01:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:20:39.576 10:01:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:20:39.576 10:01:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:39.577 10:01:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:39.577 10:01:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:39.577 10:01:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:20:39.577 10:01:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:20:39.577 10:01:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:20:39.577 10:01:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:39.577 10:01:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:39.577 10:01:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:39.577 10:01:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:39.577 10:01:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:20:39.577 PING 10.0.0.2 (10.0.0.2) 56(84) bytes 
of data. 00:20:39.577 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.142 ms 00:20:39.577 00:20:39.577 --- 10.0.0.2 ping statistics --- 00:20:39.577 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:39.577 rtt min/avg/max/mdev = 0.142/0.142/0.142/0.000 ms 00:20:39.577 10:01:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:20:39.577 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:39.577 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.084 ms 00:20:39.577 00:20:39.577 --- 10.0.0.3 ping statistics --- 00:20:39.577 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:39.577 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 00:20:39.577 10:01:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:39.577 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:39.577 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.034 ms 00:20:39.577 00:20:39.577 --- 10.0.0.1 ping statistics --- 00:20:39.577 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:39.577 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:20:39.577 10:01:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:39.577 10:01:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@433 -- # return 0 00:20:39.577 10:01:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:39.577 10:01:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:39.577 10:01:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:39.577 10:01:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:39.577 10:01:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:39.577 10:01:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:39.577 10:01:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:39.577 10:01:53 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:20:39.577 10:01:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:39.577 10:01:53 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:39.577 10:01:53 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:20:39.577 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:39.577 10:01:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=74968 00:20:39.577 10:01:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 74968 00:20:39.577 10:01:53 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@829 -- # '[' -z 74968 ']' 00:20:39.577 10:01:53 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:39.577 10:01:53 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:39.577 10:01:53 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
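The pings above confirm the virtual topology that the preceding trace just built: the target lives in the nvmf_tgt_ns_spdk namespace behind veth pairs whose host-side ends hang off the nvmf_br bridge, while the initiator stays in the root namespace at 10.0.0.1. Stripped of the helper plumbing, and omitting the second target interface at 10.0.0.3 plus the bridge FORWARD rule for brevity, the same layout is roughly:

    # Target namespace, one veth pair per side, bridged together on the host.
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator side
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br      # target side
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2    # initiator -> target, as verified above

With the data path in place, nvmf_tgt is launched inside the namespace (ip netns exec nvmf_tgt_ns_spdk .../nvmf_tgt ...), so 10.0.0.2:4420 is served from the target side of the bridge.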
00:20:39.577 10:01:53 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:39.577 10:01:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:20:39.577 10:01:53 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:20:39.837 [2024-07-15 10:01:53.184232] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:20:39.837 [2024-07-15 10:01:53.184304] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:39.837 [2024-07-15 10:01:53.327747] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:40.096 [2024-07-15 10:01:53.437412] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:40.096 [2024-07-15 10:01:53.437461] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:40.096 [2024-07-15 10:01:53.437468] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:40.096 [2024-07-15 10:01:53.437473] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:40.096 [2024-07-15 10:01:53.437478] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:40.096 [2024-07-15 10:01:53.437703] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:40.096 [2024-07-15 10:01:53.437943] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:40.096 [2024-07-15 10:01:53.437751] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:40.096 [2024-07-15 10:01:53.437947] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:20:40.733 10:01:54 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:40.733 10:01:54 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@862 -- # return 0 00:20:40.733 10:01:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:40.733 10:01:54 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:40.733 10:01:54 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:20:40.733 10:01:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:40.733 10:01:54 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:20:40.733 10:01:54 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:40.733 10:01:54 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:20:40.733 10:01:54 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:40.733 10:01:54 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:20:40.733 10:01:54 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:40.733 10:01:54 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:20:40.733 10:01:54 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:40.733 10:01:54 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # 
rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:40.733 10:01:54 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:40.733 10:01:54 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:20:40.733 [2024-07-15 10:01:54.226364] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:40.734 10:01:54 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:40.734 10:01:54 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:20:40.734 10:01:54 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:40.734 10:01:54 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:20:40.734 Malloc0 00:20:40.734 10:01:54 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:40.734 10:01:54 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:40.734 10:01:54 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:40.734 10:01:54 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:20:40.734 10:01:54 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:40.734 10:01:54 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:40.734 10:01:54 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:40.734 10:01:54 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:20:40.734 10:01:54 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:40.734 10:01:54 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:40.734 10:01:54 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:40.734 10:01:54 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:20:40.734 [2024-07-15 10:01:54.289491] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:40.734 10:01:54 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:40.734 10:01:54 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=75028 00:20:40.734 10:01:54 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:20:40.734 10:01:54 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:20:40.734 10:01:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:20:40.734 10:01:54 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=75030 00:20:40.734 10:01:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:20:40.734 10:01:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:40.734 10:01:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:40.734 { 00:20:40.734 "params": { 00:20:40.734 "name": "Nvme$subsystem", 00:20:40.734 "trtype": "$TEST_TRANSPORT", 00:20:40.734 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:40.734 "adrfam": "ipv4", 00:20:40.734 "trsvcid": "$NVMF_PORT", 00:20:40.734 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:20:40.734 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:40.734 "hdgst": ${hdgst:-false}, 00:20:40.734 "ddgst": ${ddgst:-false} 00:20:40.734 }, 00:20:40.734 "method": "bdev_nvme_attach_controller" 00:20:40.734 } 00:20:40.734 EOF 00:20:40.734 )") 00:20:40.734 10:01:54 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:20:40.734 10:01:54 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:20:40.734 10:01:54 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=75032 00:20:40.734 10:01:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:20:40.734 10:01:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:20:40.734 10:01:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:40.734 10:01:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:40.734 { 00:20:40.734 "params": { 00:20:40.734 "name": "Nvme$subsystem", 00:20:40.734 "trtype": "$TEST_TRANSPORT", 00:20:40.734 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:40.734 "adrfam": "ipv4", 00:20:40.734 "trsvcid": "$NVMF_PORT", 00:20:40.734 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:40.734 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:40.734 "hdgst": ${hdgst:-false}, 00:20:40.734 "ddgst": ${ddgst:-false} 00:20:40.734 }, 00:20:40.734 "method": "bdev_nvme_attach_controller" 00:20:40.734 } 00:20:40.734 EOF 00:20:40.734 )") 00:20:40.734 10:01:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:20:40.734 10:01:54 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:20:40.734 10:01:54 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=75035 00:20:40.734 10:01:54 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:20:40.734 10:01:54 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:20:40.734 10:01:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:20:40.734 10:01:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:20:40.734 10:01:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:20:40.734 10:01:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:40.734 10:01:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:40.734 { 00:20:40.734 "params": { 00:20:40.734 "name": "Nvme$subsystem", 00:20:40.734 "trtype": "$TEST_TRANSPORT", 00:20:40.734 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:40.734 "adrfam": "ipv4", 00:20:40.734 "trsvcid": "$NVMF_PORT", 00:20:40.734 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:40.734 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:40.734 "hdgst": ${hdgst:-false}, 00:20:40.734 "ddgst": ${ddgst:-false} 00:20:40.734 }, 00:20:40.734 "method": "bdev_nvme_attach_controller" 00:20:40.734 } 00:20:40.734 EOF 00:20:40.734 )") 00:20:40.734 10:01:54 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:20:40.734 10:01:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 
00:20:40.734 10:01:54 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:20:40.734 10:01:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:20:40.734 10:01:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:20:40.734 10:01:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:40.734 10:01:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:40.734 { 00:20:40.734 "params": { 00:20:40.734 "name": "Nvme$subsystem", 00:20:40.734 "trtype": "$TEST_TRANSPORT", 00:20:40.734 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:40.734 "adrfam": "ipv4", 00:20:40.734 "trsvcid": "$NVMF_PORT", 00:20:40.734 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:40.734 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:40.734 "hdgst": ${hdgst:-false}, 00:20:40.734 "ddgst": ${ddgst:-false} 00:20:40.734 }, 00:20:40.734 "method": "bdev_nvme_attach_controller" 00:20:40.734 } 00:20:40.734 EOF 00:20:40.734 )") 00:20:40.734 10:01:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:20:40.734 10:01:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:20:40.734 10:01:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:20:40.734 "params": { 00:20:40.734 "name": "Nvme1", 00:20:40.734 "trtype": "tcp", 00:20:40.734 "traddr": "10.0.0.2", 00:20:40.734 "adrfam": "ipv4", 00:20:40.734 "trsvcid": "4420", 00:20:40.734 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:40.734 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:40.734 "hdgst": false, 00:20:40.734 "ddgst": false 00:20:40.734 }, 00:20:40.734 "method": "bdev_nvme_attach_controller" 00:20:40.734 }' 00:20:40.734 10:01:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:20:40.734 10:01:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:20:40.734 10:01:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:20:40.734 10:01:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:20:40.734 "params": { 00:20:40.734 "name": "Nvme1", 00:20:40.734 "trtype": "tcp", 00:20:40.734 "traddr": "10.0.0.2", 00:20:40.734 "adrfam": "ipv4", 00:20:40.734 "trsvcid": "4420", 00:20:40.734 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:40.734 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:40.734 "hdgst": false, 00:20:40.734 "ddgst": false 00:20:40.734 }, 00:20:40.734 "method": "bdev_nvme_attach_controller" 00:20:40.734 }' 00:20:40.993 10:01:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:20:40.993 10:01:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:20:40.993 10:01:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:20:40.993 "params": { 00:20:40.993 "name": "Nvme1", 00:20:40.993 "trtype": "tcp", 00:20:40.993 "traddr": "10.0.0.2", 00:20:40.993 "adrfam": "ipv4", 00:20:40.993 "trsvcid": "4420", 00:20:40.993 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:40.993 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:40.993 "hdgst": false, 00:20:40.993 "ddgst": false 00:20:40.993 }, 00:20:40.993 "method": "bdev_nvme_attach_controller" 00:20:40.993 }' 00:20:40.993 10:01:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 
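The printf output above is the resolved form of the heredoc template: a single bdev_nvme_attach_controller directive pointing Nvme1 at 10.0.0.2:4420 on cnode1, one copy per bdevperf instance. Each instance receives the assembled config as --json /dev/fd/63, which is consistent with the harness handing it over through process substitution rather than a temporary file; a hedged reconstruction of one launcher line:

    # Paths and options are taken from the trace; the <( ) plumbing is inferred.
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 \
        --json <(gen_nvmf_target_json) -q 128 -o 4096 -w write -t 1 -s 256 &
    WRITE_PID=$!    # 75028 in this run; the read, flush and unmap instances follow the same pattern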
00:20:40.993 10:01:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:20:40.993 10:01:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:20:40.993 "params": { 00:20:40.993 "name": "Nvme1", 00:20:40.993 "trtype": "tcp", 00:20:40.993 "traddr": "10.0.0.2", 00:20:40.993 "adrfam": "ipv4", 00:20:40.993 "trsvcid": "4420", 00:20:40.993 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:40.993 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:40.993 "hdgst": false, 00:20:40.993 "ddgst": false 00:20:40.993 }, 00:20:40.993 "method": "bdev_nvme_attach_controller" 00:20:40.993 }' 00:20:40.993 [2024-07-15 10:01:54.356733] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:20:40.993 [2024-07-15 10:01:54.356801] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:20:40.993 10:01:54 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 75028 00:20:40.993 [2024-07-15 10:01:54.360951] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:20:40.993 [2024-07-15 10:01:54.361087] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:20:40.993 [2024-07-15 10:01:54.363955] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:20:40.993 [2024-07-15 10:01:54.364012] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:20:40.993 [2024-07-15 10:01:54.374510] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:20:40.994 [2024-07-15 10:01:54.374575] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:20:40.994 [2024-07-15 10:01:54.536550] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:41.252 [2024-07-15 10:01:54.600213] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:41.252 [2024-07-15 10:01:54.625745] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:20:41.252 [2024-07-15 10:01:54.663283] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:41.252 [2024-07-15 10:01:54.687713] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:20:41.252 [2024-07-15 10:01:54.724881] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:41.252 [2024-07-15 10:01:54.750701] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:20:41.252 Running I/O for 1 seconds... 00:20:41.252 [2024-07-15 10:01:54.811220] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:20:41.252 Running I/O for 1 seconds... 00:20:41.510 Running I/O for 1 seconds... 00:20:41.510 Running I/O for 1 seconds... 
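The reactor messages above line up with the core masks on the command lines: the target was started with -m 0xF (cores 0 through 3), and the write, read, flush and unmap bdevperf instances use -m 0x10, 0x20, 0x40 and 0x80, which are the single bits for cores 4 through 7, so no process shares a CPU. A quick check of that decoding:

    # Each mask has exactly one bit set; the bit index is the core the reactor reports.
    for mask in 0x10 0x20 0x40 0x80; do
        core=0; m=$((mask))
        while (( (m & 1) == 0 )); do m=$((m >> 1)); core=$((core + 1)); done
        echo "mask $mask -> core $core"    # prints cores 4, 5, 6, 7
    done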
00:20:42.445 00:20:42.445 Latency(us) 00:20:42.445 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:42.445 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:20:42.445 Nvme1n1 : 1.00 202517.07 791.08 0.00 0.00 629.61 246.83 2332.39 00:20:42.445 =================================================================================================================== 00:20:42.445 Total : 202517.07 791.08 0.00 0.00 629.61 246.83 2332.39 00:20:42.445 00:20:42.445 Latency(us) 00:20:42.445 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:42.445 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:20:42.445 Nvme1n1 : 1.01 10915.18 42.64 0.00 0.00 11682.70 2089.14 13164.44 00:20:42.445 =================================================================================================================== 00:20:42.445 Total : 10915.18 42.64 0.00 0.00 11682.70 2089.14 13164.44 00:20:42.445 00:20:42.445 Latency(us) 00:20:42.445 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:42.445 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:20:42.445 Nvme1n1 : 1.01 8469.70 33.08 0.00 0.00 15042.08 8757.21 24955.19 00:20:42.445 =================================================================================================================== 00:20:42.445 Total : 8469.70 33.08 0.00 0.00 15042.08 8757.21 24955.19 00:20:42.445 00:20:42.445 Latency(us) 00:20:42.445 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:42.445 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:20:42.445 Nvme1n1 : 1.01 9231.49 36.06 0.00 0.00 13821.77 5494.72 26328.87 00:20:42.445 =================================================================================================================== 00:20:42.445 Total : 9231.49 36.06 0.00 0.00 13821.77 5494.72 26328.87 00:20:42.703 10:01:56 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 75030 00:20:42.703 10:01:56 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 75032 00:20:42.703 10:01:56 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 75035 00:20:42.703 10:01:56 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:42.703 10:01:56 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:42.703 10:01:56 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:20:42.703 10:01:56 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:42.703 10:01:56 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:20:42.703 10:01:56 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:20:42.703 10:01:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:42.703 10:01:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync 00:20:42.703 10:01:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:42.703 10:01:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e 00:20:42.703 10:01:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:42.703 10:01:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:42.703 rmmod nvme_tcp 00:20:42.703 rmmod nvme_fabrics 00:20:42.703 rmmod nvme_keyring 00:20:42.963 10:01:56 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:42.963 10:01:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e 00:20:42.963 10:01:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:20:42.963 10:01:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 74968 ']' 00:20:42.963 10:01:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 74968 00:20:42.963 10:01:56 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@948 -- # '[' -z 74968 ']' 00:20:42.963 10:01:56 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # kill -0 74968 00:20:42.963 10:01:56 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # uname 00:20:42.963 10:01:56 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:42.963 10:01:56 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 74968 00:20:42.963 killing process with pid 74968 00:20:42.963 10:01:56 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:42.963 10:01:56 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:42.963 10:01:56 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@966 -- # echo 'killing process with pid 74968' 00:20:42.963 10:01:56 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@967 -- # kill 74968 00:20:42.963 10:01:56 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # wait 74968 00:20:42.963 10:01:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:42.963 10:01:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:42.963 10:01:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:42.963 10:01:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:42.963 10:01:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:42.963 10:01:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:42.963 10:01:56 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:42.963 10:01:56 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:43.223 10:01:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:20:43.223 00:20:43.223 real 0m3.954s 00:20:43.223 user 0m17.472s 00:20:43.223 sys 0m1.751s 00:20:43.223 10:01:56 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:43.223 10:01:56 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:20:43.223 ************************************ 00:20:43.223 END TEST nvmf_bdev_io_wait 00:20:43.223 ************************************ 00:20:43.223 10:01:56 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:20:43.223 10:01:56 nvmf_tcp -- nvmf/nvmf.sh@51 -- # run_test nvmf_queue_depth /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:20:43.223 10:01:56 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:20:43.223 10:01:56 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:43.223 10:01:56 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:43.223 ************************************ 00:20:43.223 START TEST nvmf_queue_depth 00:20:43.223 ************************************ 00:20:43.223 10:01:56 
nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:20:43.223 * Looking for test storage... 00:20:43.223 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:20:43.223 10:01:56 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:43.223 10:01:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:20:43.223 10:01:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:43.223 10:01:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:43.223 10:01:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:43.223 10:01:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:43.223 10:01:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:43.223 10:01:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:43.223 10:01:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:43.223 10:01:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:43.223 10:01:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:43.223 10:01:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:43.223 10:01:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec 00:20:43.223 10:01:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=a2b6b25a-cc90-4aea-9f09-c06f8a634aec 00:20:43.223 10:01:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:43.223 10:01:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:43.223 10:01:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:43.223 10:01:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:43.223 10:01:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:43.223 10:01:56 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:43.223 10:01:56 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:43.223 10:01:56 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:43.224 10:01:56 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:43.224 10:01:56 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:43.224 10:01:56 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:43.224 10:01:56 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:20:43.224 10:01:56 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:43.224 10:01:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:20:43.224 10:01:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:43.224 10:01:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:43.224 10:01:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:43.224 10:01:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:43.224 10:01:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:43.224 10:01:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:43.224 10:01:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:43.224 10:01:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:43.224 10:01:56 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:20:43.224 10:01:56 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:20:43.224 10:01:56 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:43.224 10:01:56 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:20:43.224 10:01:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:43.224 10:01:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:43.224 10:01:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:43.224 10:01:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:43.224 10:01:56 nvmf_tcp.nvmf_queue_depth -- 
nvmf/common.sh@412 -- # remove_spdk_ns 00:20:43.224 10:01:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:43.224 10:01:56 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:43.224 10:01:56 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:43.224 10:01:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:20:43.224 10:01:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:20:43.224 10:01:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:20:43.224 10:01:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:20:43.224 10:01:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:20:43.224 10:01:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@432 -- # nvmf_veth_init 00:20:43.224 10:01:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:43.224 10:01:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:43.224 10:01:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:20:43.224 10:01:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:20:43.224 10:01:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:43.224 10:01:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:43.224 10:01:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:43.224 10:01:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:43.224 10:01:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:43.224 10:01:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:43.224 10:01:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:43.224 10:01:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:43.224 10:01:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:20:43.484 10:01:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:20:43.484 Cannot find device "nvmf_tgt_br" 00:20:43.484 10:01:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@155 -- # true 00:20:43.484 10:01:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:20:43.484 Cannot find device "nvmf_tgt_br2" 00:20:43.484 10:01:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@156 -- # true 00:20:43.484 10:01:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:20:43.484 10:01:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:20:43.484 Cannot find device "nvmf_tgt_br" 00:20:43.484 10:01:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@158 -- # true 00:20:43.484 10:01:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:20:43.484 Cannot find device "nvmf_tgt_br2" 00:20:43.484 10:01:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@159 -- # true 00:20:43.484 10:01:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:20:43.484 10:01:56 
nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:20:43.484 10:01:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:43.484 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:43.484 10:01:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@162 -- # true 00:20:43.484 10:01:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:43.484 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:43.484 10:01:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@163 -- # true 00:20:43.484 10:01:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:20:43.484 10:01:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:43.484 10:01:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:43.484 10:01:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:43.484 10:01:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:43.484 10:01:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:43.484 10:01:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:43.484 10:01:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:20:43.484 10:01:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:20:43.484 10:01:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:20:43.484 10:01:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:20:43.484 10:01:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:20:43.484 10:01:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:20:43.484 10:01:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:43.484 10:01:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:43.484 10:01:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:43.484 10:01:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:20:43.484 10:01:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:20:43.484 10:01:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:20:43.753 10:01:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:43.753 10:01:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:43.753 10:01:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:43.753 10:01:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:43.753 10:01:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 
00:20:43.753 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:43.753 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.128 ms 00:20:43.753 00:20:43.753 --- 10.0.0.2 ping statistics --- 00:20:43.753 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:43.753 rtt min/avg/max/mdev = 0.128/0.128/0.128/0.000 ms 00:20:43.753 10:01:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:20:43.753 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:43.753 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.040 ms 00:20:43.753 00:20:43.753 --- 10.0.0.3 ping statistics --- 00:20:43.753 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:43.753 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:20:43.753 10:01:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:43.753 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:43.753 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.070 ms 00:20:43.753 00:20:43.753 --- 10.0.0.1 ping statistics --- 00:20:43.753 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:43.753 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:20:43.753 10:01:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:43.753 10:01:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@433 -- # return 0 00:20:43.753 10:01:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:43.753 10:01:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:43.753 10:01:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:43.753 10:01:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:43.753 10:01:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:43.753 10:01:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:43.753 10:01:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:43.753 10:01:57 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:20:43.753 10:01:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:43.753 10:01:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:43.753 10:01:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:20:43.753 10:01:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=75258 00:20:43.753 10:01:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:43.753 10:01:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 75258 00:20:43.753 10:01:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 75258 ']' 00:20:43.753 10:01:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:43.753 10:01:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:43.753 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:43.753 10:01:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
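The nvmf_veth_init trace above builds the virtual topology the queue-depth test runs on: a network namespace for the target, veth pairs for the initiator and target sides, a bridge joining them, and ping checks of 10.0.0.1/10.0.0.2/10.0.0.3 before the target application is launched. A condensed, stand-alone sketch of the same idea follows; interface names and addresses are copied from the trace, but this is an illustration (one target interface only, no error handling), not the script itself.

  # build a namespace + veth + bridge topology like the one traced above (run as root)
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator side
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br       # target side
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link add nvmf_br type bridge
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$dev" up; done
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # mirror the trace's firewall rules
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2    # initiator -> target reachability, as checked in the trace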
00:20:43.753 10:01:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:43.753 10:01:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:20:43.753 [2024-07-15 10:01:57.211095] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:20:43.753 [2024-07-15 10:01:57.211170] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:44.012 [2024-07-15 10:01:57.349965] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:44.012 [2024-07-15 10:01:57.457927] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:44.012 [2024-07-15 10:01:57.457972] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:44.012 [2024-07-15 10:01:57.457980] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:44.012 [2024-07-15 10:01:57.457985] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:44.012 [2024-07-15 10:01:57.457990] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:44.012 [2024-07-15 10:01:57.458013] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:44.579 10:01:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:44.579 10:01:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:20:44.579 10:01:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:44.579 10:01:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:44.579 10:01:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:20:44.579 10:01:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:44.579 10:01:58 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:44.579 10:01:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:44.579 10:01:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:20:44.579 [2024-07-15 10:01:58.160587] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:44.837 10:01:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:44.837 10:01:58 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:20:44.837 10:01:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:44.837 10:01:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:20:44.837 Malloc0 00:20:44.837 10:01:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:44.837 10:01:58 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:44.837 10:01:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:44.837 10:01:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:20:44.837 10:01:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
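In the trace above, rpc_cmd is effectively the test framework's wrapper around scripts/rpc.py talking to the target's default /var/tmp/spdk.sock, so the provisioning it performs can be reproduced with direct rpc.py calls. A minimal sketch is below; flags are copied from the trace, and the namespace and listener steps correspond to the rpc_cmd lines that follow in the log.

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192                                     # TCP transport
  $rpc bdev_malloc_create 64 512 -b Malloc0                                        # 64 MiB RAM bdev, 512 B blocks
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001   # -a: allow any host
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420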
00:20:44.837 10:01:58 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:44.837 10:01:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:44.837 10:01:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:20:44.837 10:01:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:44.837 10:01:58 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:44.837 10:01:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:44.837 10:01:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:20:44.837 [2024-07-15 10:01:58.227625] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:44.837 10:01:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:44.837 10:01:58 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=75309 00:20:44.837 10:01:58 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:20:44.837 10:01:58 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:44.837 10:01:58 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 75309 /var/tmp/bdevperf.sock 00:20:44.837 10:01:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 75309 ']' 00:20:44.837 10:01:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:44.838 10:01:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:44.838 10:01:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:44.838 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:44.838 10:01:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:44.838 10:01:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:20:44.838 [2024-07-15 10:01:58.287338] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
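The bdevperf instance just launched runs in wait-for-RPC mode (-z) on its own socket, so no I/O starts until the test attaches the NVMe-oF namespace as a bdev and triggers the run; the next trace lines do exactly that. A condensed sketch of the flow, with paths and flags taken from the trace (traps, waitforlisten, and cleanup omitted):

  spdk=/home/vagrant/spdk_repo/spdk
  $spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
  bdevperf_pid=$!
  sleep 1   # the real test polls the RPC socket instead of sleeping
  $spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  $spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests   # 10 s verify run at QD 1024
  kill "$bdevperf_pid" && wait "$bdevperf_pid"   # the test does this via its killprocess helper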
00:20:44.838 [2024-07-15 10:01:58.287416] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75309 ] 00:20:44.838 [2024-07-15 10:01:58.411119] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:45.096 [2024-07-15 10:01:58.538325] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:45.664 10:01:59 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:45.664 10:01:59 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:20:45.664 10:01:59 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:20:45.664 10:01:59 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:45.664 10:01:59 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:20:45.922 NVMe0n1 00:20:45.922 10:01:59 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:45.922 10:01:59 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:45.922 Running I/O for 10 seconds... 00:20:55.941 00:20:55.941 Latency(us) 00:20:55.941 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:55.941 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:20:55.941 Verification LBA range: start 0x0 length 0x4000 00:20:55.941 NVMe0n1 : 10.05 11317.95 44.21 0.00 0.00 90170.99 11161.15 66394.55 00:20:55.941 =================================================================================================================== 00:20:55.941 Total : 11317.95 44.21 0.00 0.00 90170.99 11161.15 66394.55 00:20:55.941 0 00:20:55.941 10:02:09 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 75309 00:20:55.941 10:02:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 75309 ']' 00:20:55.941 10:02:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 75309 00:20:55.941 10:02:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname 00:20:55.941 10:02:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:55.941 10:02:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 75309 00:20:55.941 10:02:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:55.941 10:02:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:55.941 10:02:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 75309' 00:20:55.941 killing process with pid 75309 00:20:55.941 10:02:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 75309 00:20:55.941 Received shutdown signal, test time was about 10.000000 seconds 00:20:55.941 00:20:55.941 Latency(us) 00:20:55.941 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:55.941 =================================================================================================================== 00:20:55.941 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:55.941 10:02:09 
nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 75309 00:20:56.201 10:02:09 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:20:56.201 10:02:09 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:20:56.201 10:02:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:56.201 10:02:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync 00:20:56.201 10:02:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:56.201 10:02:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e 00:20:56.201 10:02:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:56.201 10:02:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:56.201 rmmod nvme_tcp 00:20:56.201 rmmod nvme_fabrics 00:20:56.201 rmmod nvme_keyring 00:20:56.201 10:02:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:56.201 10:02:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e 00:20:56.201 10:02:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0 00:20:56.201 10:02:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 75258 ']' 00:20:56.201 10:02:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 75258 00:20:56.201 10:02:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 75258 ']' 00:20:56.201 10:02:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 75258 00:20:56.201 10:02:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname 00:20:56.201 10:02:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:56.201 10:02:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 75258 00:20:56.201 10:02:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:20:56.201 10:02:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:20:56.201 10:02:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 75258' 00:20:56.201 killing process with pid 75258 00:20:56.201 10:02:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 75258 00:20:56.201 10:02:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 75258 00:20:56.461 10:02:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:56.461 10:02:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:56.461 10:02:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:56.461 10:02:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:56.461 10:02:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:56.461 10:02:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:56.461 10:02:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:56.461 10:02:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:56.461 10:02:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:20:56.461 00:20:56.461 real 0m13.387s 00:20:56.461 user 0m23.215s 00:20:56.461 sys 0m1.903s 00:20:56.461 10:02:10 nvmf_tcp.nvmf_queue_depth -- 
common/autotest_common.sh@1124 -- # xtrace_disable 00:20:56.461 10:02:10 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:20:56.461 ************************************ 00:20:56.461 END TEST nvmf_queue_depth 00:20:56.461 ************************************ 00:20:56.721 10:02:10 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:20:56.721 10:02:10 nvmf_tcp -- nvmf/nvmf.sh@52 -- # run_test nvmf_target_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:20:56.721 10:02:10 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:20:56.721 10:02:10 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:56.721 10:02:10 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:56.721 ************************************ 00:20:56.721 START TEST nvmf_target_multipath 00:20:56.721 ************************************ 00:20:56.721 10:02:10 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:20:56.721 * Looking for test storage... 00:20:56.721 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:20:56.721 10:02:10 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:56.721 10:02:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:20:56.721 10:02:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:56.721 10:02:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:56.721 10:02:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:56.721 10:02:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:56.721 10:02:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:56.721 10:02:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:56.721 10:02:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:56.721 10:02:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:56.721 10:02:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:56.721 10:02:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:56.721 10:02:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec 00:20:56.721 10:02:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=a2b6b25a-cc90-4aea-9f09-c06f8a634aec 00:20:56.721 10:02:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:56.721 10:02:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:56.721 10:02:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:56.721 10:02:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:56.721 10:02:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:56.721 10:02:10 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:56.721 10:02:10 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:56.721 10:02:10 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:56.721 10:02:10 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:56.721 10:02:10 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:56.721 10:02:10 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:56.721 10:02:10 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:20:56.721 10:02:10 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:56.721 10:02:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:20:56.721 10:02:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:56.721 10:02:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:56.721 10:02:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:56.721 10:02:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:56.721 10:02:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:56.721 10:02:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:56.721 10:02:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:56.721 10:02:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@51 -- 
# have_pci_nics=0 00:20:56.721 10:02:10 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:56.721 10:02:10 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:56.721 10:02:10 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:20:56.721 10:02:10 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:56.721 10:02:10 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:20:56.722 10:02:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:56.722 10:02:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:56.722 10:02:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:56.722 10:02:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:56.722 10:02:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:56.722 10:02:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:56.722 10:02:10 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:56.722 10:02:10 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:56.722 10:02:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:20:56.722 10:02:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:20:56.722 10:02:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:20:56.722 10:02:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:20:56.722 10:02:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:20:56.722 10:02:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@432 -- # nvmf_veth_init 00:20:56.722 10:02:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:56.722 10:02:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:56.722 10:02:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:20:56.722 10:02:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:20:56.722 10:02:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:56.722 10:02:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:56.722 10:02:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:56.722 10:02:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:56.722 10:02:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:56.722 10:02:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:56.722 10:02:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:56.722 10:02:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:56.722 10:02:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:20:56.722 10:02:10 
nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:20:56.722 Cannot find device "nvmf_tgt_br" 00:20:56.722 10:02:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@155 -- # true 00:20:56.722 10:02:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:20:56.722 Cannot find device "nvmf_tgt_br2" 00:20:56.722 10:02:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@156 -- # true 00:20:56.722 10:02:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:20:56.722 10:02:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:20:56.980 Cannot find device "nvmf_tgt_br" 00:20:56.980 10:02:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@158 -- # true 00:20:56.980 10:02:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:20:56.980 Cannot find device "nvmf_tgt_br2" 00:20:56.980 10:02:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@159 -- # true 00:20:56.981 10:02:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:20:56.981 10:02:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:20:56.981 10:02:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:56.981 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:56.981 10:02:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@162 -- # true 00:20:56.981 10:02:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:56.981 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:56.981 10:02:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@163 -- # true 00:20:56.981 10:02:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:20:56.981 10:02:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:56.981 10:02:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:56.981 10:02:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:56.981 10:02:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:56.981 10:02:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:56.981 10:02:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:56.981 10:02:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:20:56.981 10:02:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:20:56.981 10:02:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:20:56.981 10:02:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:20:56.981 10:02:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:20:56.981 10:02:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:20:56.981 
10:02:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:56.981 10:02:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:56.981 10:02:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:56.981 10:02:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:20:56.981 10:02:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:20:56.981 10:02:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:20:56.981 10:02:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:56.981 10:02:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:56.981 10:02:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:56.981 10:02:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:56.981 10:02:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:20:56.981 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:56.981 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.087 ms 00:20:56.981 00:20:56.981 --- 10.0.0.2 ping statistics --- 00:20:56.981 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:56.981 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:20:56.981 10:02:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:20:56.981 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:56.981 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.055 ms 00:20:56.981 00:20:56.981 --- 10.0.0.3 ping statistics --- 00:20:56.981 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:56.981 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:20:56.981 10:02:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:56.981 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:56.981 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:20:56.981 00:20:56.981 --- 10.0.0.1 ping statistics --- 00:20:56.981 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:56.981 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:20:56.981 10:02:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:56.981 10:02:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@433 -- # return 0 00:20:56.981 10:02:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:56.981 10:02:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:56.981 10:02:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:56.981 10:02:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:56.981 10:02:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:56.981 10:02:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:56.981 10:02:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:57.239 10:02:10 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z 10.0.0.3 ']' 00:20:57.239 10:02:10 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@51 -- # '[' tcp '!=' tcp ']' 00:20:57.239 10:02:10 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@57 -- # nvmfappstart -m 0xF 00:20:57.239 10:02:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:57.239 10:02:10 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:57.239 10:02:10 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:20:57.239 10:02:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@481 -- # nvmfpid=75635 00:20:57.239 10:02:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:57.239 10:02:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@482 -- # waitforlisten 75635 00:20:57.239 10:02:10 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@829 -- # '[' -z 75635 ']' 00:20:57.239 10:02:10 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:57.239 10:02:10 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:57.239 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:57.239 10:02:10 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:57.239 10:02:10 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:57.239 10:02:10 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:20:57.239 [2024-07-15 10:02:10.663424] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
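Once this target is up it exposes the same subsystem on two portals (10.0.0.2 and 10.0.0.3, both port 4420), and the host connects to each so that native NVMe multipath merges them into one subsystem with two paths. A minimal host-side sketch of what the later trace lines verify; the trace itself adds explicit --hostnqn/--hostid values and the -g/-G transport flags, and the nvme-subsys0 name below is an assumption that depends on enumeration order and on native multipath being enabled.

  nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
  nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420
  nvme list-subsys                                           # expect one subsystem with two tcp paths
  ls /sys/class/nvme-subsystem/nvme-subsys0/nvme*/nvme*c*    # per-path devices, e.g. nvme0c0n1 and nvme0c1n1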
00:20:57.239 [2024-07-15 10:02:10.663488] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:57.239 [2024-07-15 10:02:10.802170] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:57.497 [2024-07-15 10:02:10.901330] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:57.497 [2024-07-15 10:02:10.901380] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:57.497 [2024-07-15 10:02:10.901387] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:57.497 [2024-07-15 10:02:10.901391] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:57.497 [2024-07-15 10:02:10.901395] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:57.497 [2024-07-15 10:02:10.901706] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:57.497 [2024-07-15 10:02:10.901848] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:57.497 [2024-07-15 10:02:10.901958] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:57.497 [2024-07-15 10:02:10.901972] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:20:58.064 10:02:11 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:58.064 10:02:11 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@862 -- # return 0 00:20:58.064 10:02:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:58.064 10:02:11 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:58.064 10:02:11 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:20:58.064 10:02:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:58.064 10:02:11 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:20:58.323 [2024-07-15 10:02:11.759522] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:58.323 10:02:11 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:20:58.581 Malloc0 00:20:58.581 10:02:12 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r 00:20:58.840 10:02:12 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:59.099 10:02:12 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:59.099 [2024-07-15 10:02:12.644732] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:59.099 10:02:12 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t 
tcp -a 10.0.0.3 -s 4420 00:20:59.356 [2024-07-15 10:02:12.848514] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:20:59.356 10:02:12 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@67 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec --hostid=a2b6b25a-cc90-4aea-9f09-c06f8a634aec -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 -g -G 00:20:59.613 10:02:13 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@68 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec --hostid=a2b6b25a-cc90-4aea-9f09-c06f8a634aec -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G 00:20:59.870 10:02:13 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@69 -- # waitforserial SPDKISFASTANDAWESOME 00:20:59.870 10:02:13 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1198 -- # local i=0 00:20:59.870 10:02:13 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:20:59.870 10:02:13 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:20:59.870 10:02:13 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1205 -- # sleep 2 00:21:01.772 10:02:15 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:21:01.772 10:02:15 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:21:01.772 10:02:15 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:21:01.772 10:02:15 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:21:01.772 10:02:15 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:21:01.772 10:02:15 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1208 -- # return 0 00:21:01.772 10:02:15 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@72 -- # get_subsystem nqn.2016-06.io.spdk:cnode1 SPDKISFASTANDAWESOME 00:21:01.772 10:02:15 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@34 -- # local nqn=nqn.2016-06.io.spdk:cnode1 serial=SPDKISFASTANDAWESOME s 00:21:01.772 10:02:15 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@36 -- # for s in /sys/class/nvme-subsystem/* 00:21:01.772 10:02:15 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:21:01.772 10:02:15 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ SPDKISFASTANDAWESOME == \S\P\D\K\I\S\F\A\S\T\A\N\D\A\W\E\S\O\M\E ]] 00:21:01.772 10:02:15 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@38 -- # echo nvme-subsys0 00:21:01.772 10:02:15 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@38 -- # return 0 00:21:01.772 10:02:15 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@72 -- # subsystem=nvme-subsys0 00:21:01.772 10:02:15 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@73 -- # paths=(/sys/class/nvme-subsystem/$subsystem/nvme*/nvme*c*) 00:21:01.772 10:02:15 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@74 -- # paths=("${paths[@]##*/}") 00:21:01.772 10:02:15 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@76 -- # (( 2 == 2 )) 00:21:01.772 10:02:15 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@78 -- # p0=nvme0c0n1 00:21:01.772 10:02:15 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@79 -- 
# p1=nvme0c1n1 00:21:01.772 10:02:15 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@81 -- # check_ana_state nvme0c0n1 optimized 00:21:01.772 10:02:15 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:21:01.772 10:02:15 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:21:01.772 10:02:15 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:21:01.772 10:02:15 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:21:01.772 10:02:15 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:21:01.772 10:02:15 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@82 -- # check_ana_state nvme0c1n1 optimized 00:21:01.772 10:02:15 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:21:01.772 10:02:15 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:21:01.772 10:02:15 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:21:01.772 10:02:15 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:21:01.773 10:02:15 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:21:01.773 10:02:15 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@85 -- # echo numa 00:21:01.773 10:02:15 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@88 -- # fio_pid=75778 00:21:01.773 10:02:15 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:21:01.773 10:02:15 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@90 -- # sleep 1 00:21:02.031 [global] 00:21:02.031 thread=1 00:21:02.031 invalidate=1 00:21:02.031 rw=randrw 00:21:02.031 time_based=1 00:21:02.031 runtime=6 00:21:02.031 ioengine=libaio 00:21:02.031 direct=1 00:21:02.031 bs=4096 00:21:02.031 iodepth=128 00:21:02.031 norandommap=0 00:21:02.031 numjobs=1 00:21:02.031 00:21:02.031 verify_dump=1 00:21:02.031 verify_backlog=512 00:21:02.031 verify_state_save=0 00:21:02.031 do_verify=1 00:21:02.031 verify=crc32c-intel 00:21:02.031 [job0] 00:21:02.031 filename=/dev/nvme0n1 00:21:02.031 Could not set queue depth (nvme0n1) 00:21:02.031 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:21:02.031 fio-3.35 00:21:02.031 Starting 1 thread 00:21:02.969 10:02:16 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:21:03.229 10:02:16 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:21:03.229 10:02:16 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@95 -- # check_ana_state nvme0c0n1 inaccessible 00:21:03.229 10:02:16 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:21:03.229 10:02:16 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:21:03.229 10:02:16 nvmf_tcp.nvmf_target_multipath -- 
target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:21:03.229 10:02:16 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:21:03.229 10:02:16 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:21:03.229 10:02:16 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@96 -- # check_ana_state nvme0c1n1 non-optimized 00:21:03.229 10:02:16 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:21:03.229 10:02:16 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:21:03.524 10:02:16 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:21:03.524 10:02:16 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:21:03.524 10:02:16 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:21:03.524 10:02:16 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:21:04.460 10:02:17 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:21:04.460 10:02:17 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:21:04.460 10:02:17 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:21:04.460 10:02:17 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:21:04.719 10:02:18 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:21:04.978 10:02:18 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@101 -- # check_ana_state nvme0c0n1 non-optimized 00:21:04.978 10:02:18 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:21:04.978 10:02:18 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:21:04.978 10:02:18 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:21:04.978 10:02:18 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:21:04.978 10:02:18 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:21:04.978 10:02:18 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@102 -- # check_ana_state nvme0c1n1 inaccessible 00:21:04.978 10:02:18 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:21:04.978 10:02:18 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:21:04.978 10:02:18 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:21:04.978 10:02:18 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:21:04.978 10:02:18 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:21:04.978 10:02:18 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:21:05.914 10:02:19 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:21:05.914 10:02:19 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:21:05.914 10:02:19 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:21:05.914 10:02:19 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@104 -- # wait 75778 00:21:08.449 00:21:08.449 job0: (groupid=0, jobs=1): err= 0: pid=75799: Mon Jul 15 10:02:21 2024 00:21:08.449 read: IOPS=13.2k, BW=51.5MiB/s (54.0MB/s)(309MiB/6006msec) 00:21:08.449 slat (usec): min=4, max=6020, avg=41.55, stdev=170.84 00:21:08.449 clat (usec): min=393, max=13675, avg=6703.06, stdev=1137.51 00:21:08.449 lat (usec): min=424, max=13703, avg=6744.61, stdev=1142.58 00:21:08.449 clat percentiles (usec): 00:21:08.449 | 1.00th=[ 3982], 5.00th=[ 5080], 10.00th=[ 5473], 20.00th=[ 5866], 00:21:08.449 | 30.00th=[ 6194], 40.00th=[ 6456], 50.00th=[ 6652], 60.00th=[ 6849], 00:21:08.449 | 70.00th=[ 7111], 80.00th=[ 7439], 90.00th=[ 7963], 95.00th=[ 8586], 00:21:08.449 | 99.00th=[10159], 99.50th=[10552], 99.90th=[11600], 99.95th=[11994], 00:21:08.449 | 99.99th=[12911] 00:21:08.449 bw ( KiB/s): min=11337, max=31968, per=51.90%, avg=27368.73, stdev=6480.58, samples=11 00:21:08.449 iops : min= 2834, max= 7992, avg=6842.27, stdev=1620.30, samples=11 00:21:08.449 write: IOPS=7628, BW=29.8MiB/s (31.2MB/s)(157MiB/5256msec); 0 zone resets 00:21:08.449 slat (usec): min=11, max=2757, avg=56.57, stdev=111.55 00:21:08.449 clat (usec): min=293, max=12154, avg=5738.45, stdev=1031.98 00:21:08.449 lat (usec): min=456, max=12174, avg=5795.01, stdev=1034.00 00:21:08.449 clat percentiles (usec): 00:21:08.449 | 1.00th=[ 2802], 5.00th=[ 4113], 10.00th=[ 4555], 20.00th=[ 5080], 00:21:08.449 | 30.00th=[ 5342], 40.00th=[ 5604], 50.00th=[ 5800], 60.00th=[ 5932], 00:21:08.449 | 70.00th=[ 6128], 80.00th=[ 6390], 90.00th=[ 6783], 95.00th=[ 7177], 00:21:08.449 | 99.00th=[ 8848], 99.50th=[ 9503], 99.90th=[10814], 99.95th=[11207], 00:21:08.449 | 99.99th=[11863] 00:21:08.449 bw ( KiB/s): min=11680, max=31616, per=89.60%, avg=27341.36, stdev=6191.17, samples=11 00:21:08.449 iops : min= 2922, max= 7904, avg=6835.27, stdev=1547.20, samples=11 00:21:08.449 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.03% 00:21:08.449 lat (msec) : 2=0.18%, 4=1.86%, 10=96.97%, 20=0.93% 00:21:08.449 cpu : usr=6.33%, sys=31.32%, ctx=8855, majf=0, minf=96 00:21:08.449 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:21:08.449 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:08.449 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:08.449 issued rwts: total=79173,40095,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:08.449 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:08.449 00:21:08.449 Run status group 0 (all jobs): 00:21:08.449 READ: bw=51.5MiB/s (54.0MB/s), 51.5MiB/s-51.5MiB/s (54.0MB/s-54.0MB/s), io=309MiB (324MB), run=6006-6006msec 00:21:08.449 WRITE: bw=29.8MiB/s (31.2MB/s), 29.8MiB/s-29.8MiB/s (31.2MB/s-31.2MB/s), io=157MiB (164MB), run=5256-5256msec 00:21:08.449 00:21:08.449 Disk stats (read/write): 00:21:08.449 nvme0n1: ios=78163/39236, 
merge=0/0, ticks=471030/198786, in_queue=669816, util=98.60% 00:21:08.449 10:02:21 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:21:08.449 10:02:21 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:21:08.709 10:02:22 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@109 -- # check_ana_state nvme0c0n1 optimized 00:21:08.709 10:02:22 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:21:08.709 10:02:22 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:21:08.709 10:02:22 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:21:08.709 10:02:22 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:21:08.709 10:02:22 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:21:08.709 10:02:22 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@110 -- # check_ana_state nvme0c1n1 optimized 00:21:08.709 10:02:22 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:21:08.709 10:02:22 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:21:08.709 10:02:22 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:21:08.709 10:02:22 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:21:08.709 10:02:22 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \o\p\t\i\m\i\z\e\d ]] 00:21:08.709 10:02:22 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:21:09.647 10:02:23 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:21:09.647 10:02:23 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:21:09.647 10:02:23 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:21:09.647 10:02:23 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@113 -- # echo round-robin 00:21:09.647 10:02:23 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@116 -- # fio_pid=75933 00:21:09.647 10:02:23 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:21:09.647 10:02:23 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@118 -- # sleep 1 00:21:09.647 [global] 00:21:09.647 thread=1 00:21:09.647 invalidate=1 00:21:09.647 rw=randrw 00:21:09.647 time_based=1 00:21:09.647 runtime=6 00:21:09.647 ioengine=libaio 00:21:09.647 direct=1 00:21:09.647 bs=4096 00:21:09.647 iodepth=128 00:21:09.647 norandommap=0 00:21:09.647 numjobs=1 00:21:09.647 00:21:09.647 verify_dump=1 00:21:09.647 verify_backlog=512 00:21:09.647 verify_state_save=0 00:21:09.647 do_verify=1 00:21:09.647 verify=crc32c-intel 00:21:09.647 [job0] 00:21:09.647 filename=/dev/nvme0n1 00:21:09.647 Could not set queue depth (nvme0n1) 00:21:09.936 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:21:09.936 fio-3.35 00:21:09.936 Starting 1 thread 00:21:10.877 10:02:24 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:21:10.877 10:02:24 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:21:11.136 10:02:24 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@123 -- # check_ana_state nvme0c0n1 inaccessible 00:21:11.136 10:02:24 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:21:11.136 10:02:24 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:21:11.136 10:02:24 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:21:11.136 10:02:24 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:21:11.136 10:02:24 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:21:11.136 10:02:24 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@124 -- # check_ana_state nvme0c1n1 non-optimized 00:21:11.136 10:02:24 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:21:11.136 10:02:24 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:21:11.136 10:02:24 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:21:11.136 10:02:24 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:21:11.136 10:02:24 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:21:11.136 10:02:24 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:21:12.073 10:02:25 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:21:12.073 10:02:25 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:21:12.073 10:02:25 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:21:12.073 10:02:25 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:21:12.332 10:02:25 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:21:12.591 10:02:25 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@129 -- # check_ana_state nvme0c0n1 non-optimized 00:21:12.591 10:02:25 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:21:12.591 10:02:25 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:21:12.591 10:02:25 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:21:12.591 10:02:25 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:21:12.591 10:02:25 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:21:12.591 10:02:25 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@130 -- # check_ana_state nvme0c1n1 inaccessible 00:21:12.591 10:02:25 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:21:12.591 10:02:25 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:21:12.591 10:02:25 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:21:12.591 10:02:25 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:21:12.591 10:02:25 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:21:12.591 10:02:25 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:21:13.524 10:02:26 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:21:13.524 10:02:26 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:21:13.524 10:02:26 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:21:13.524 10:02:26 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@132 -- # wait 75933 00:21:16.064 00:21:16.064 job0: (groupid=0, jobs=1): err= 0: pid=75954: Mon Jul 15 10:02:29 2024 00:21:16.064 read: IOPS=14.4k, BW=56.3MiB/s (59.1MB/s)(338MiB/6004msec) 00:21:16.064 slat (usec): min=4, max=6654, avg=34.07, stdev=144.34 00:21:16.064 clat (usec): min=255, max=15758, avg=6125.27, stdev=1275.72 00:21:16.064 lat (usec): min=266, max=15809, avg=6159.34, stdev=1284.91 00:21:16.064 clat percentiles (usec): 00:21:16.064 | 1.00th=[ 3326], 5.00th=[ 4113], 10.00th=[ 4490], 20.00th=[ 5145], 00:21:16.064 | 30.00th=[ 5538], 40.00th=[ 5866], 50.00th=[ 6128], 60.00th=[ 6390], 00:21:16.064 | 70.00th=[ 6652], 80.00th=[ 6980], 90.00th=[ 7504], 95.00th=[ 8225], 00:21:16.064 | 99.00th=[ 9896], 99.50th=[10552], 99.90th=[12256], 99.95th=[12649], 00:21:16.064 | 99.99th=[14091] 00:21:16.064 bw ( KiB/s): min=16704, max=40598, per=51.19%, avg=29533.64, stdev=8941.30, samples=11 00:21:16.064 iops : min= 4176, max=10149, avg=7383.36, stdev=2235.26, samples=11 00:21:16.064 write: IOPS=8608, BW=33.6MiB/s (35.3MB/s)(175MiB/5203msec); 0 zone resets 00:21:16.064 slat (usec): min=10, max=3139, avg=48.72, stdev=96.14 00:21:16.064 clat (usec): min=189, max=12762, avg=5144.69, stdev=1211.50 00:21:16.064 lat (usec): min=252, max=12789, avg=5193.42, stdev=1219.21 00:21:16.064 clat percentiles (usec): 00:21:16.064 | 1.00th=[ 2606], 5.00th=[ 3163], 10.00th=[ 3523], 20.00th=[ 4080], 00:21:16.064 | 30.00th=[ 4621], 40.00th=[ 5014], 50.00th=[ 5276], 60.00th=[ 5473], 00:21:16.064 | 70.00th=[ 5735], 80.00th=[ 5997], 90.00th=[ 6390], 95.00th=[ 6849], 00:21:16.064 | 99.00th=[ 8717], 99.50th=[ 9503], 99.90th=[10814], 99.95th=[11207], 00:21:16.064 | 99.99th=[11731] 00:21:16.064 bw ( KiB/s): min=17496, max=40960, per=85.93%, avg=29591.91, stdev=8608.06, samples=11 00:21:16.064 iops : min= 4374, max=10240, avg=7398.09, stdev=2151.94, samples=11 00:21:16.064 lat (usec) : 250=0.01%, 500=0.02%, 750=0.05%, 1000=0.05% 00:21:16.064 lat (msec) : 2=0.17%, 4=8.71%, 10=90.30%, 20=0.68% 00:21:16.064 cpu : usr=5.96%, sys=32.48%, ctx=10230, majf=0, minf=121 00:21:16.064 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:21:16.064 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:16.064 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:16.064 issued rwts: total=86598,44792,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:16.064 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:16.064 00:21:16.064 Run status group 0 (all jobs): 00:21:16.064 READ: bw=56.3MiB/s (59.1MB/s), 56.3MiB/s-56.3MiB/s (59.1MB/s-59.1MB/s), io=338MiB (355MB), run=6004-6004msec 00:21:16.064 WRITE: bw=33.6MiB/s (35.3MB/s), 33.6MiB/s-33.6MiB/s (35.3MB/s-35.3MB/s), io=175MiB (183MB), run=5203-5203msec 00:21:16.064 00:21:16.064 Disk stats (read/write): 00:21:16.064 nvme0n1: ios=85814/43775, merge=0/0, ticks=469113/194223, in_queue=663336, util=98.63% 00:21:16.064 10:02:29 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@134 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:21:16.064 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:21:16.064 10:02:29 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@135 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:21:16.064 10:02:29 
nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1219 -- # local i=0 00:21:16.064 10:02:29 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:21:16.064 10:02:29 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:21:16.064 10:02:29 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:21:16.064 10:02:29 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:21:16.064 10:02:29 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1231 -- # return 0 00:21:16.064 10:02:29 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:16.324 10:02:29 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@139 -- # rm -f ./local-job0-0-verify.state 00:21:16.324 10:02:29 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@140 -- # rm -f ./local-job1-1-verify.state 00:21:16.324 10:02:29 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@142 -- # trap - SIGINT SIGTERM EXIT 00:21:16.324 10:02:29 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@144 -- # nvmftestfini 00:21:16.324 10:02:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:16.324 10:02:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:21:16.324 10:02:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:16.324 10:02:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:21:16.324 10:02:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:16.324 10:02:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:16.324 rmmod nvme_tcp 00:21:16.324 rmmod nvme_fabrics 00:21:16.324 rmmod nvme_keyring 00:21:16.324 10:02:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:16.324 10:02:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:21:16.324 10:02:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:21:16.324 10:02:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n 75635 ']' 00:21:16.324 10:02:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@490 -- # killprocess 75635 00:21:16.324 10:02:29 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@948 -- # '[' -z 75635 ']' 00:21:16.324 10:02:29 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@952 -- # kill -0 75635 00:21:16.324 10:02:29 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@953 -- # uname 00:21:16.324 10:02:29 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:16.324 10:02:29 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 75635 00:21:16.324 killing process with pid 75635 00:21:16.324 10:02:29 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:16.324 10:02:29 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:16.324 10:02:29 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@966 -- # echo 'killing process with pid 75635' 00:21:16.324 10:02:29 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@967 -- # kill 75635 00:21:16.324 10:02:29 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@972 
-- # wait 75635 00:21:16.583 10:02:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:16.583 10:02:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:16.583 10:02:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:16.583 10:02:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:16.583 10:02:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:16.583 10:02:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:16.583 10:02:30 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:16.583 10:02:30 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:16.583 10:02:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:21:16.844 00:21:16.844 real 0m20.085s 00:21:16.844 user 1m18.759s 00:21:16.844 sys 0m6.937s 00:21:16.844 10:02:30 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:16.844 10:02:30 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:21:16.844 ************************************ 00:21:16.844 END TEST nvmf_target_multipath 00:21:16.844 ************************************ 00:21:16.844 10:02:30 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:21:16.844 10:02:30 nvmf_tcp -- nvmf/nvmf.sh@53 -- # run_test nvmf_zcopy /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:21:16.844 10:02:30 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:21:16.844 10:02:30 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:16.844 10:02:30 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:16.844 ************************************ 00:21:16.844 START TEST nvmf_zcopy 00:21:16.844 ************************************ 00:21:16.844 10:02:30 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:21:16.844 * Looking for test storage... 
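Before the zcopy output continues, one helper from the multipath test that just finished is worth spelling out: check_ana_state, which the xtrace above keeps re-entering, is a bounded poll on the kernel's per-path ANA state file. Re-sketched from the trace rather than copied from multipath.sh:

    check_ana_state() {
        local path=$1 ana_state=$2
        local timeout=20
        local ana_state_f=/sys/block/$path/ana_state
        # Wait until the path exposes the expected ANA state; give up after ~20 seconds.
        while [[ ! -e $ana_state_f ]] || [[ $(<"$ana_state_f") != "$ana_state" ]]; do
            (( timeout-- == 0 )) && return 1
            sleep 1s
        done
    }

It is called above as, for example, check_ana_state nvme0c0n1 inaccessible immediately after the 10.0.0.2 listener is switched to the inaccessible ANA state.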
00:21:16.844 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:21:16.844 10:02:30 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:16.844 10:02:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:21:16.844 10:02:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:16.844 10:02:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:16.844 10:02:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:16.844 10:02:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:16.844 10:02:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:16.844 10:02:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:16.844 10:02:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:16.844 10:02:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:16.844 10:02:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:16.844 10:02:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:16.844 10:02:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec 00:21:16.844 10:02:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=a2b6b25a-cc90-4aea-9f09-c06f8a634aec 00:21:16.844 10:02:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:16.844 10:02:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:16.844 10:02:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:16.844 10:02:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:16.844 10:02:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:16.844 10:02:30 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:16.844 10:02:30 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:16.844 10:02:30 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:16.844 10:02:30 nvmf_tcp.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:16.844 10:02:30 nvmf_tcp.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:16.844 10:02:30 nvmf_tcp.nvmf_zcopy -- paths/export.sh@4 
-- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:16.844 10:02:30 nvmf_tcp.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:21:16.844 10:02:30 nvmf_tcp.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:16.844 10:02:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:21:16.844 10:02:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:16.844 10:02:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:16.844 10:02:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:16.844 10:02:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:16.844 10:02:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:16.844 10:02:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:16.844 10:02:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:16.844 10:02:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:16.844 10:02:30 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:21:16.844 10:02:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:16.844 10:02:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:16.844 10:02:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:16.844 10:02:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:16.844 10:02:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:16.844 10:02:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:16.844 10:02:30 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:16.844 10:02:30 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:16.844 10:02:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:21:16.844 10:02:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:21:16.844 10:02:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:21:16.844 10:02:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:21:16.844 10:02:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:21:16.844 10:02:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@432 -- # nvmf_veth_init 00:21:16.844 10:02:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:16.844 10:02:30 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:16.844 10:02:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:21:16.844 10:02:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:21:16.844 10:02:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:16.844 10:02:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:16.844 10:02:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:16.844 10:02:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:16.844 10:02:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:16.844 10:02:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:16.844 10:02:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:16.844 10:02:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:16.844 10:02:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:21:16.844 10:02:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:21:17.104 Cannot find device "nvmf_tgt_br" 00:21:17.104 10:02:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@155 -- # true 00:21:17.104 10:02:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:21:17.104 Cannot find device "nvmf_tgt_br2" 00:21:17.104 10:02:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@156 -- # true 00:21:17.104 10:02:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:21:17.104 10:02:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:21:17.104 Cannot find device "nvmf_tgt_br" 00:21:17.104 10:02:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@158 -- # true 00:21:17.104 10:02:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:21:17.104 Cannot find device "nvmf_tgt_br2" 00:21:17.104 10:02:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@159 -- # true 00:21:17.104 10:02:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:21:17.104 10:02:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:21:17.104 10:02:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:17.104 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:17.104 10:02:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@162 -- # true 00:21:17.104 10:02:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:17.104 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:17.104 10:02:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@163 -- # true 00:21:17.104 10:02:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:21:17.104 10:02:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:17.104 10:02:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:17.104 10:02:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:17.104 10:02:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns 
nvmf_tgt_ns_spdk 00:21:17.104 10:02:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:17.104 10:02:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:17.104 10:02:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:21:17.104 10:02:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:21:17.104 10:02:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:21:17.104 10:02:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:21:17.104 10:02:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:21:17.104 10:02:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:21:17.104 10:02:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:17.363 10:02:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:17.363 10:02:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:17.363 10:02:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:21:17.363 10:02:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:21:17.363 10:02:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:21:17.363 10:02:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:17.363 10:02:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:17.363 10:02:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:17.363 10:02:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:17.363 10:02:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:21:17.363 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:17.363 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.124 ms 00:21:17.363 00:21:17.363 --- 10.0.0.2 ping statistics --- 00:21:17.363 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:17.363 rtt min/avg/max/mdev = 0.124/0.124/0.124/0.000 ms 00:21:17.363 10:02:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:21:17.363 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:21:17.363 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.092 ms 00:21:17.363 00:21:17.363 --- 10.0.0.3 ping statistics --- 00:21:17.363 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:17.363 rtt min/avg/max/mdev = 0.092/0.092/0.092/0.000 ms 00:21:17.363 10:02:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:17.363 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:17.363 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.069 ms 00:21:17.363 00:21:17.363 --- 10.0.0.1 ping statistics --- 00:21:17.363 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:17.363 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:21:17.363 10:02:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:17.363 10:02:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@433 -- # return 0 00:21:17.363 10:02:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:17.363 10:02:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:17.363 10:02:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:17.363 10:02:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:17.363 10:02:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:17.363 10:02:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:17.363 10:02:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:17.363 10:02:30 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:21:17.363 10:02:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:17.363 10:02:30 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:17.363 10:02:30 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:21:17.363 10:02:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=76231 00:21:17.363 10:02:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:17.363 10:02:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 76231 00:21:17.363 10:02:30 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@829 -- # '[' -z 76231 ']' 00:21:17.363 10:02:30 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:17.363 10:02:30 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:17.363 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:17.363 10:02:30 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:17.363 10:02:30 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:17.363 10:02:30 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:21:17.363 [2024-07-15 10:02:30.871154] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:21:17.363 [2024-07-15 10:02:30.871222] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:17.621 [2024-07-15 10:02:31.008011] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:17.621 [2024-07-15 10:02:31.110283] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:17.621 [2024-07-15 10:02:31.110327] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
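The nvmf_veth_init sequence above builds the zcopy test topology: a bridge (nvmf_br) joining the initiator-side veth (nvmf_init_if, 10.0.0.1) to two target-side veths moved into the nvmf_tgt_ns_spdk namespace (10.0.0.2 and 10.0.0.3), plus an iptables rule admitting NVMe/TCP traffic on port 4420. Condensed to a single target leg, the same setup looks roughly like this (a sketch; the second target interface follows the identical pattern):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator side
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target side
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up && ip link set nvmf_init_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br up && ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2   # initiator to target, across the bridge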
00:21:17.621 [2024-07-15 10:02:31.110333] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:17.621 [2024-07-15 10:02:31.110354] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:17.621 [2024-07-15 10:02:31.110359] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:17.621 [2024-07-15 10:02:31.110395] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:18.190 10:02:31 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:18.190 10:02:31 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@862 -- # return 0 00:21:18.190 10:02:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:18.190 10:02:31 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:18.190 10:02:31 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:21:18.190 10:02:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:18.190 10:02:31 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:21:18.190 10:02:31 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:21:18.190 10:02:31 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:18.190 10:02:31 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:21:18.190 [2024-07-15 10:02:31.771103] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:18.449 10:02:31 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:18.449 10:02:31 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:21:18.449 10:02:31 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:18.449 10:02:31 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:21:18.449 10:02:31 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:18.449 10:02:31 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:18.449 10:02:31 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:18.449 10:02:31 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:21:18.449 [2024-07-15 10:02:31.795149] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:18.449 10:02:31 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:18.449 10:02:31 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:21:18.449 10:02:31 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:18.449 10:02:31 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:21:18.449 10:02:31 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:18.449 10:02:31 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:21:18.449 10:02:31 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:18.449 10:02:31 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:21:18.449 malloc0 00:21:18.449 10:02:31 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:18.449 
10:02:31 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:21:18.449 10:02:31 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:18.449 10:02:31 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:21:18.449 10:02:31 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:18.449 10:02:31 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:21:18.449 10:02:31 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:21:18.449 10:02:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:21:18.449 10:02:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:21:18.449 10:02:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:18.449 10:02:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:18.449 { 00:21:18.449 "params": { 00:21:18.449 "name": "Nvme$subsystem", 00:21:18.449 "trtype": "$TEST_TRANSPORT", 00:21:18.449 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:18.449 "adrfam": "ipv4", 00:21:18.449 "trsvcid": "$NVMF_PORT", 00:21:18.449 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:18.449 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:18.449 "hdgst": ${hdgst:-false}, 00:21:18.449 "ddgst": ${ddgst:-false} 00:21:18.449 }, 00:21:18.449 "method": "bdev_nvme_attach_controller" 00:21:18.449 } 00:21:18.449 EOF 00:21:18.449 )") 00:21:18.449 10:02:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:21:18.449 10:02:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 00:21:18.449 10:02:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:21:18.449 10:02:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:21:18.449 "params": { 00:21:18.449 "name": "Nvme1", 00:21:18.449 "trtype": "tcp", 00:21:18.449 "traddr": "10.0.0.2", 00:21:18.449 "adrfam": "ipv4", 00:21:18.449 "trsvcid": "4420", 00:21:18.449 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:18.449 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:18.449 "hdgst": false, 00:21:18.449 "ddgst": false 00:21:18.449 }, 00:21:18.449 "method": "bdev_nvme_attach_controller" 00:21:18.449 }' 00:21:18.449 [2024-07-15 10:02:31.893119] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:21:18.449 [2024-07-15 10:02:31.893172] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76282 ] 00:21:18.449 [2024-07-15 10:02:32.029297] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:18.709 [2024-07-15 10:02:32.135455] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:18.709 Running I/O for 10 seconds... 
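The JSON fragment printed by gen_nvmf_target_json above is what bdevperf reads from fd 62. To reproduce this first run by hand, that attach-controller entry has to sit inside bdevperf's usual subsystems/bdev/config envelope; the wrapper below and the /tmp/nvmf_bdevperf.json path are assumptions filled in for illustration, while the params block and the command-line flags come from the log:

    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_nvme_attach_controller",
              "params": {
                "name": "Nvme1",
                "trtype": "tcp",
                "traddr": "10.0.0.2",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode1",
                "hostnqn": "nqn.2016-06.io.spdk:host1",
                "hdgst": false,
                "ddgst": false
              }
            }
          ]
        }
      ]
    }

    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json /tmp/nvmf_bdevperf.json -t 10 -q 128 -w verify -o 8192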
00:21:28.719 00:21:28.719 Latency(us) 00:21:28.719 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:28.719 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:21:28.719 Verification LBA range: start 0x0 length 0x1000 00:21:28.719 Nvme1n1 : 10.01 8350.00 65.23 0.00 0.00 15284.71 339.84 25756.51 00:21:28.719 =================================================================================================================== 00:21:28.719 Total : 8350.00 65.23 0.00 0.00 15284.71 339.84 25756.51 00:21:28.979 10:02:42 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=76399 00:21:28.979 10:02:42 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:21:28.979 10:02:42 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:21:28.979 10:02:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:21:28.979 10:02:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:21:28.979 10:02:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:28.979 10:02:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:28.979 { 00:21:28.979 "params": { 00:21:28.979 "name": "Nvme$subsystem", 00:21:28.979 "trtype": "$TEST_TRANSPORT", 00:21:28.979 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:28.979 "adrfam": "ipv4", 00:21:28.979 "trsvcid": "$NVMF_PORT", 00:21:28.979 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:28.979 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:28.979 "hdgst": ${hdgst:-false}, 00:21:28.979 "ddgst": ${ddgst:-false} 00:21:28.979 }, 00:21:28.979 "method": "bdev_nvme_attach_controller" 00:21:28.979 } 00:21:28.979 EOF 00:21:28.979 )") 00:21:28.979 10:02:42 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:21:28.979 10:02:42 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:21:28.979 10:02:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:21:28.979 10:02:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 
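The gen_nvmf_target_json trace repeated here (config=(), one cat <<-EOF fragment per subsystem, then jq, IFS and printf) is a compact bash pattern: build one attach-controller fragment per subsystem in an array, join the fragments with commas, and pretty-print the result. A stripped-down re-sketch, not the verbatim common.sh helper (the envelope layout is assumed and the addresses are hard-coded for brevity):

    gen_target_json_sketch() {
        local subsystem config=()
        for subsystem in "${@:-1}"; do
            # One bdev_nvme_attach_controller entry per requested subsystem number.
            config+=("$(printf '{"method":"bdev_nvme_attach_controller","params":{"name":"Nvme%s","trtype":"tcp","traddr":"10.0.0.2","adrfam":"ipv4","trsvcid":"4420","subnqn":"nqn.2016-06.io.spdk:cnode%s","hostnqn":"nqn.2016-06.io.spdk:host%s","hdgst":false,"ddgst":false}}' \
                "$subsystem" "$subsystem" "$subsystem")")
        done
        # Join the fragments with commas inside the bdev subsystem envelope.
        local IFS=,
        printf '{"subsystems":[{"subsystem":"bdev","config":[%s]}]}\n' "${config[*]}" | jq .
    }

Fed to the second run via process substitution, mirroring the /dev/fd/63 seen above:

    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json <(gen_target_json_sketch 1) -t 5 -q 128 -w randrw -M 50 -o 8192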
00:21:28.979 10:02:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:21:28.979 10:02:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:21:28.979 "params": { 00:21:28.979 "name": "Nvme1", 00:21:28.979 "trtype": "tcp", 00:21:28.979 "traddr": "10.0.0.2", 00:21:28.979 "adrfam": "ipv4", 00:21:28.979 "trsvcid": "4420", 00:21:28.979 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:28.979 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:28.979 "hdgst": false, 00:21:28.979 "ddgst": false 00:21:28.979 }, 00:21:28.979 "method": "bdev_nvme_attach_controller" 00:21:28.979 }' 00:21:28.979 [2024-07-15 10:02:42.482515] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:28.979 [2024-07-15 10:02:42.482551] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:28.979 2024/07/15 10:02:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:28.979 [2024-07-15 10:02:42.494454] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:28.979 [2024-07-15 10:02:42.494470] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:28.979 2024/07/15 10:02:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:28.979 [2024-07-15 10:02:42.506422] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:28.979 [2024-07-15 10:02:42.506437] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:28.979 2024/07/15 10:02:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:28.979 [2024-07-15 10:02:42.518396] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:28.979 [2024-07-15 10:02:42.518410] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:28.979 [2024-07-15 10:02:42.521823] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
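From here on the log interleaves two things: bdevperf coming up for the 5-second randrw run, and the harness repeatedly re-issuing the same nvmf_subsystem_add_ns RPC against cnode1 while that I/O runs. Because NSID 1 was already attached at zcopy.sh@30 above, every repeat is rejected: the target logs "Requested NSID 1 already in use" and the client sees JSON-RPC error -32602 (Invalid parameters), which is the pair of messages repeated below. The same rejection can be provoked by hand with the stock SPDK RPC client; the scripts/rpc.py invocation below is illustrative and assumes a running target that already carries the namespace.

# First attach succeeds (this is what zcopy.sh@30 did above); repeating the
# identical call is refused because NSID 1 is already in use on cnode1.
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
# second call fails, reported to the caller as JSON-RPC Code=-32602 (Invalid parameters)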
00:21:28.979 [2024-07-15 10:02:42.521872] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76399 ] 00:21:28.979 2024/07/15 10:02:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:28.979 [2024-07-15 10:02:42.530390] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:28.979 [2024-07-15 10:02:42.530405] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:28.979 2024/07/15 10:02:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:28.979 [2024-07-15 10:02:42.542387] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:28.979 [2024-07-15 10:02:42.542409] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:28.979 2024/07/15 10:02:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:28.979 [2024-07-15 10:02:42.554387] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:28.979 [2024-07-15 10:02:42.554419] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:28.979 2024/07/15 10:02:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:29.238 [2024-07-15 10:02:42.566386] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:29.238 [2024-07-15 10:02:42.566418] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:29.238 2024/07/15 10:02:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:29.238 [2024-07-15 10:02:42.578323] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:29.238 [2024-07-15 10:02:42.578348] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:29.238 2024/07/15 10:02:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:29.238 [2024-07-15 10:02:42.590286] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:29.238 [2024-07-15 10:02:42.590303] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:21:29.238 2024/07/15 10:02:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:29.238 [2024-07-15 10:02:42.602268] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:29.238 [2024-07-15 10:02:42.602283] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:29.238 2024/07/15 10:02:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:29.238 [2024-07-15 10:02:42.614251] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:29.238 [2024-07-15 10:02:42.614267] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:29.238 2024/07/15 10:02:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:29.238 [2024-07-15 10:02:42.626225] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:29.238 [2024-07-15 10:02:42.626240] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:29.238 2024/07/15 10:02:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:29.238 [2024-07-15 10:02:42.638203] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:29.238 [2024-07-15 10:02:42.638217] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:29.238 2024/07/15 10:02:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:29.238 [2024-07-15 10:02:42.650182] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:29.238 [2024-07-15 10:02:42.650194] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:29.238 2024/07/15 10:02:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:29.238 [2024-07-15 10:02:42.657902] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:29.238 [2024-07-15 10:02:42.662186] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:29.238 [2024-07-15 10:02:42.662209] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:29.238 2024/07/15 10:02:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:29.238 [2024-07-15 10:02:42.674152] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:29.238 [2024-07-15 10:02:42.674168] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:29.238 2024/07/15 10:02:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:29.238 [2024-07-15 10:02:42.686134] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:29.238 [2024-07-15 10:02:42.686147] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:29.238 2024/07/15 10:02:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:29.238 [2024-07-15 10:02:42.698114] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:29.238 [2024-07-15 10:02:42.698127] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:29.238 2024/07/15 10:02:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:29.238 [2024-07-15 10:02:42.710105] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:29.239 [2024-07-15 10:02:42.710118] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:29.239 2024/07/15 10:02:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:29.239 [2024-07-15 10:02:42.722078] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:29.239 [2024-07-15 10:02:42.722094] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:29.239 2024/07/15 10:02:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:29.239 [2024-07-15 10:02:42.734050] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:29.239 [2024-07-15 10:02:42.734062] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:29.239 2024/07/15 10:02:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:29.239 [2024-07-15 10:02:42.746032] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:29.239 [2024-07-15 10:02:42.746044] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:29.239 2024/07/15 10:02:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:29.239 [2024-07-15 10:02:42.758009] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:29.239 [2024-07-15 10:02:42.758021] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:29.239 [2024-07-15 10:02:42.760302] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:29.239 2024/07/15 10:02:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:29.239 [2024-07-15 10:02:42.769998] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:29.239 [2024-07-15 10:02:42.770019] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:29.239 2024/07/15 10:02:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:29.239 [2024-07-15 10:02:42.781987] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:29.239 [2024-07-15 10:02:42.782007] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:29.239 2024/07/15 10:02:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:29.239 [2024-07-15 10:02:42.793957] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:29.239 [2024-07-15 10:02:42.793973] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:29.239 2024/07/15 10:02:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:29.239 [2024-07-15 10:02:42.805932] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:29.239 [2024-07-15 10:02:42.805949] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:29.239 2024/07/15 10:02:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:29.239 [2024-07-15 10:02:42.817929] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:29.239 [2024-07-15 10:02:42.817949] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:29.499 2024/07/15 10:02:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:29.499 [2024-07-15 10:02:42.829889] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:29.499 [2024-07-15 10:02:42.829903] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:29.499 2024/07/15 10:02:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:29.499 [2024-07-15 10:02:42.841869] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:29.499 [2024-07-15 10:02:42.841882] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:29.499 2024/07/15 10:02:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:29.499 [2024-07-15 10:02:42.853897] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:29.499 [2024-07-15 10:02:42.853929] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:29.499 2024/07/15 10:02:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:29.499 [2024-07-15 10:02:42.865850] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:29.499 [2024-07-15 10:02:42.865873] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:29.499 2024/07/15 10:02:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:29.499 [2024-07-15 10:02:42.877818] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:29.499 [2024-07-15 10:02:42.877837] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:29.499 2024/07/15 10:02:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:29.499 [2024-07-15 10:02:42.889797] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:29.499 [2024-07-15 10:02:42.889815] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:29.499 2024/07/15 10:02:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:29.499 [2024-07-15 10:02:42.901771] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:21:29.499 [2024-07-15 10:02:42.901784] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:29.499 2024/07/15 10:02:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:29.499 [2024-07-15 10:02:42.913778] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:29.499 [2024-07-15 10:02:42.913820] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:29.499 Running I/O for 5 seconds... 00:21:29.499 2024/07/15 10:02:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:29.499 [2024-07-15 10:02:42.928744] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:29.499 [2024-07-15 10:02:42.928769] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:29.499 2024/07/15 10:02:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:29.499 [2024-07-15 10:02:42.943848] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:29.499 [2024-07-15 10:02:42.943874] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:29.499 2024/07/15 10:02:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:29.499 [2024-07-15 10:02:42.958126] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:29.499 [2024-07-15 10:02:42.958152] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:29.499 2024/07/15 10:02:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:29.499 [2024-07-15 10:02:42.970292] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:29.499 [2024-07-15 10:02:42.970317] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:29.499 2024/07/15 10:02:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:29.499 [2024-07-15 10:02:42.985722] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:29.499 [2024-07-15 10:02:42.985747] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:29.499 2024/07/15 10:02:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:29.499 [2024-07-15 10:02:43.002086] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:29.499 [2024-07-15 10:02:43.002133] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:29.499 2024/07/15 10:02:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:29.499 [2024-07-15 10:02:43.013164] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:29.499 [2024-07-15 10:02:43.013193] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:29.499 2024/07/15 10:02:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:29.499 [2024-07-15 10:02:43.028578] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:29.499 [2024-07-15 10:02:43.028611] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:29.499 2024/07/15 10:02:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:29.499 [2024-07-15 10:02:43.044667] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:29.499 [2024-07-15 10:02:43.044706] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:29.499 2024/07/15 10:02:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:29.499 [2024-07-15 10:02:43.059120] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:29.499 [2024-07-15 10:02:43.059147] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:29.499 2024/07/15 10:02:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:29.499 [2024-07-15 10:02:43.070563] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:29.500 [2024-07-15 10:02:43.070589] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:29.500 2024/07/15 10:02:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:29.500 [2024-07-15 10:02:43.080153] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:21:29.500 [2024-07-15 10:02:43.080179] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:29.760 2024/07/15 10:02:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:29.760 [2024-07-15 10:02:43.096100] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:29.760 [2024-07-15 10:02:43.096124] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:29.760 2024/07/15 10:02:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:29.760 [2024-07-15 10:02:43.111465] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:29.760 [2024-07-15 10:02:43.111495] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:29.760 2024/07/15 10:02:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:29.760 [2024-07-15 10:02:43.121710] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:29.760 [2024-07-15 10:02:43.121735] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:29.760 2024/07/15 10:02:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:29.760 [2024-07-15 10:02:43.130288] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:29.760 [2024-07-15 10:02:43.130317] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:29.760 2024/07/15 10:02:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:29.760 [2024-07-15 10:02:43.140424] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:29.760 [2024-07-15 10:02:43.140450] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:29.760 2024/07/15 10:02:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:29.760 [2024-07-15 10:02:43.149915] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:29.760 [2024-07-15 10:02:43.149944] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:29.760 2024/07/15 10:02:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 
no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:29.760 [2024-07-15 10:02:43.165440] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:29.760 [2024-07-15 10:02:43.165473] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:29.760 2024/07/15 10:02:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:29.760 [2024-07-15 10:02:43.181220] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:29.760 [2024-07-15 10:02:43.181252] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:29.760 2024/07/15 10:02:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:29.760 [2024-07-15 10:02:43.195915] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:29.760 [2024-07-15 10:02:43.195942] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:29.760 2024/07/15 10:02:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:29.760 [2024-07-15 10:02:43.210250] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:29.760 [2024-07-15 10:02:43.210277] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:29.760 2024/07/15 10:02:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:29.760 [2024-07-15 10:02:43.224785] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:29.760 [2024-07-15 10:02:43.224813] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:29.760 2024/07/15 10:02:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:29.760 [2024-07-15 10:02:43.236410] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:29.760 [2024-07-15 10:02:43.236440] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:29.760 2024/07/15 10:02:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:29.760 [2024-07-15 10:02:43.251666] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
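The client-side lines in this stream evidently come from a Go-based JSON-RPC client (the map[...] parameter dump and the %!s(bool=false) verb are Go's fmt formatting), and they only summarize the failure as Code=-32602 Msg=Invalid parameters. On the wire this corresponds to a standard JSON-RPC 2.0 exchange with the target's RPC socket. The sketch below is illustrative: the socket path (/var/tmp/spdk.sock is only the usual SPDK default), the request id, and the use of nc -U are assumptions rather than something shown in this log.

# Raw shape of the failing call and of the error reply it earns. Requires a
# netcat with UNIX-socket support; adjust the socket path to the target's.
printf '%s\n' '{"jsonrpc":"2.0","id":1,"method":"nvmf_subsystem_add_ns","params":{"nqn":"nqn.2016-06.io.spdk:cnode1","namespace":{"bdev_name":"malloc0","nsid":1}}}' \
  | nc -U /var/tmp/spdk.sock
# Expected reply when NSID 1 is already attached (code and message as logged here):
#   {"jsonrpc":"2.0","id":1,"error":{"code":-32602,"message":"Invalid parameters"}}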
00:21:29.760 [2024-07-15 10:02:43.251701] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:29.760 2024/07/15 10:02:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:29.760 [2024-07-15 10:02:43.267531] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:29.760 [2024-07-15 10:02:43.267558] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:29.760 2024/07/15 10:02:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:29.760 [2024-07-15 10:02:43.282380] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:29.760 [2024-07-15 10:02:43.282405] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:29.760 2024/07/15 10:02:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:29.760 [2024-07-15 10:02:43.297970] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:29.760 [2024-07-15 10:02:43.298003] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:29.760 2024/07/15 10:02:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:29.760 [2024-07-15 10:02:43.312524] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:29.760 [2024-07-15 10:02:43.312549] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:29.760 2024/07/15 10:02:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:29.760 [2024-07-15 10:02:43.326869] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:29.760 [2024-07-15 10:02:43.326895] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:29.760 2024/07/15 10:02:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:29.760 [2024-07-15 10:02:43.341407] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:29.760 [2024-07-15 10:02:43.341433] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:30.019 2024/07/15 10:02:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:30.019 [2024-07-15 10:02:43.355561] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:30.019 [2024-07-15 10:02:43.355586] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:30.019 2024/07/15 10:02:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:30.019 [2024-07-15 10:02:43.369694] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:30.019 [2024-07-15 10:02:43.369717] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:30.019 2024/07/15 10:02:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:30.019 [2024-07-15 10:02:43.380279] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:30.019 [2024-07-15 10:02:43.380303] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:30.019 2024/07/15 10:02:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:30.020 [2024-07-15 10:02:43.394582] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:30.020 [2024-07-15 10:02:43.394616] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:30.020 2024/07/15 10:02:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:30.020 [2024-07-15 10:02:43.405930] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:30.020 [2024-07-15 10:02:43.405964] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:30.020 2024/07/15 10:02:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:30.020 [2024-07-15 10:02:43.422223] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:30.020 [2024-07-15 10:02:43.422251] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:30.020 2024/07/15 10:02:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:30.020 [2024-07-15 10:02:43.437292] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:30.020 [2024-07-15 10:02:43.437320] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:30.020 2024/07/15 10:02:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:30.020 [2024-07-15 10:02:43.452719] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:30.020 [2024-07-15 10:02:43.452747] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:30.020 2024/07/15 10:02:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:30.020 [2024-07-15 10:02:43.468382] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:30.020 [2024-07-15 10:02:43.468412] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:30.020 2024/07/15 10:02:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:30.020 [2024-07-15 10:02:43.483061] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:30.020 [2024-07-15 10:02:43.483087] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:30.020 2024/07/15 10:02:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:30.020 [2024-07-15 10:02:43.494564] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:30.020 [2024-07-15 10:02:43.494589] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:30.020 2024/07/15 10:02:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:30.020 [2024-07-15 10:02:43.509659] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:30.020 [2024-07-15 10:02:43.509708] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:30.020 2024/07/15 10:02:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:30.020 [2024-07-15 10:02:43.520308] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:30.020 [2024-07-15 10:02:43.520342] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:30.020 2024/07/15 10:02:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for 
nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:30.020 [2024-07-15 10:02:43.535538] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:30.020 [2024-07-15 10:02:43.535562] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:30.020 2024/07/15 10:02:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:30.020 [2024-07-15 10:02:43.550761] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:30.020 [2024-07-15 10:02:43.550783] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:30.020 2024/07/15 10:02:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:30.020 [2024-07-15 10:02:43.564783] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:30.020 [2024-07-15 10:02:43.564805] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:30.020 2024/07/15 10:02:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:30.020 [2024-07-15 10:02:43.579037] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:30.020 [2024-07-15 10:02:43.579073] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:30.020 2024/07/15 10:02:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:30.020 [2024-07-15 10:02:43.592995] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:30.020 [2024-07-15 10:02:43.593033] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:30.020 2024/07/15 10:02:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:30.280 [2024-07-15 10:02:43.607752] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:30.280 [2024-07-15 10:02:43.607777] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:30.280 2024/07/15 10:02:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:30.280 [2024-07-15 10:02:43.622144] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:30.280 [2024-07-15 10:02:43.622185] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:21:30.280 2024/07/15 10:02:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:30.280 [2024-07-15 10:02:43.636474] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:30.280 [2024-07-15 10:02:43.636501] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:30.280 2024/07/15 10:02:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:30.280 [2024-07-15 10:02:43.651005] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:30.280 [2024-07-15 10:02:43.651030] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:30.280 2024/07/15 10:02:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:30.280 [2024-07-15 10:02:43.661843] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:30.280 [2024-07-15 10:02:43.661867] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:30.281 2024/07/15 10:02:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:30.281 [2024-07-15 10:02:43.677585] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:30.281 [2024-07-15 10:02:43.677615] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:30.281 2024/07/15 10:02:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:30.281 [2024-07-15 10:02:43.693987] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:30.281 [2024-07-15 10:02:43.694012] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:30.281 2024/07/15 10:02:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:30.281 [2024-07-15 10:02:43.705126] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:30.281 [2024-07-15 10:02:43.705152] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:30.281 2024/07/15 10:02:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
parameters 00:21:30.281 [2024-07-15 10:02:43.720637] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:30.281 [2024-07-15 10:02:43.720674] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:30.281 2024/07/15 10:02:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:30.281 [2024-07-15 10:02:43.736097] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:30.281 [2024-07-15 10:02:43.736148] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:30.281 2024/07/15 10:02:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:30.281 [2024-07-15 10:02:43.750696] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:30.281 [2024-07-15 10:02:43.750726] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:30.281 2024/07/15 10:02:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:30.281 [2024-07-15 10:02:43.766235] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:30.281 [2024-07-15 10:02:43.766266] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:30.281 2024/07/15 10:02:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:30.281 [2024-07-15 10:02:43.780718] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:30.281 [2024-07-15 10:02:43.780745] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:30.281 2024/07/15 10:02:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:30.281 [2024-07-15 10:02:43.794452] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:30.281 [2024-07-15 10:02:43.794478] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:30.281 2024/07/15 10:02:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:30.281 [2024-07-15 10:02:43.808359] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:30.281 [2024-07-15 10:02:43.808381] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:30.281 2024/07/15 10:02:43 error on 
JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:30.281 [2024-07-15 10:02:43.822486] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:30.281 [2024-07-15 10:02:43.822510] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:30.281 2024/07/15 10:02:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:30.281 [2024-07-15 10:02:43.836115] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:30.281 [2024-07-15 10:02:43.836140] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:30.281 2024/07/15 10:02:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:30.281 [2024-07-15 10:02:43.850290] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:30.281 [2024-07-15 10:02:43.850316] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:30.281 2024/07/15 10:02:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:30.540 [2024-07-15 10:02:43.865000] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:30.540 [2024-07-15 10:02:43.865044] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:30.540 2024/07/15 10:02:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:30.540 [2024-07-15 10:02:43.881062] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:30.540 [2024-07-15 10:02:43.881101] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:30.540 2024/07/15 10:02:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:30.540 [2024-07-15 10:02:43.892728] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:30.540 [2024-07-15 10:02:43.892757] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:30.540 2024/07/15 10:02:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:30.540 [2024-07-15 10:02:43.908100] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:30.540 [2024-07-15 10:02:43.908129] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:30.540 2024/07/15 10:02:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:30.540 [2024-07-15 10:02:43.919382] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:30.540 [2024-07-15 10:02:43.919407] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:30.540 2024/07/15 10:02:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:30.540 [2024-07-15 10:02:43.934966] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:30.540 [2024-07-15 10:02:43.934996] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:30.540 2024/07/15 10:02:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:30.540 [2024-07-15 10:02:43.950200] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:30.540 [2024-07-15 10:02:43.950231] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:30.540 2024/07/15 10:02:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:30.540 [2024-07-15 10:02:43.965464] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:30.540 [2024-07-15 10:02:43.965515] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:30.540 2024/07/15 10:02:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:30.540 [2024-07-15 10:02:43.981030] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:30.540 [2024-07-15 10:02:43.981062] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:30.540 2024/07/15 10:02:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:30.540 [2024-07-15 10:02:43.995430] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:30.540 [2024-07-15 10:02:43.995461] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:30.541 2024/07/15 10:02:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:30.541 [2024-07-15 10:02:44.009594] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:30.541 [2024-07-15 10:02:44.009625] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:30.541 2024/07/15 10:02:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:30.541 [2024-07-15 10:02:44.023518] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:30.541 [2024-07-15 10:02:44.023549] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:30.541 2024/07/15 10:02:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:30.541 [2024-07-15 10:02:44.038783] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:30.541 [2024-07-15 10:02:44.038811] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:30.541 2024/07/15 10:02:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:30.541 [2024-07-15 10:02:44.049375] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:30.541 [2024-07-15 10:02:44.049404] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:30.541 2024/07/15 10:02:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:30.541 [2024-07-15 10:02:44.063812] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:30.541 [2024-07-15 10:02:44.063838] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:30.541 2024/07/15 10:02:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:30.541 [2024-07-15 10:02:44.077660] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:30.541 [2024-07-15 10:02:44.077694] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:30.541 2024/07/15 10:02:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:30.541 [2024-07-15 10:02:44.091865] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:21:30.541 [2024-07-15 10:02:44.091891] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:30.541 2024/07/15 10:02:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:30.541 [2024-07-15 10:02:44.105830] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:30.541 [2024-07-15 10:02:44.105856] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:30.541 2024/07/15 10:02:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:30.541 [2024-07-15 10:02:44.120244] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:30.541 [2024-07-15 10:02:44.120287] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:30.541 2024/07/15 10:02:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:30.800 [2024-07-15 10:02:44.131447] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:30.800 [2024-07-15 10:02:44.131485] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:30.800 2024/07/15 10:02:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:30.800 [2024-07-15 10:02:44.146836] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:30.800 [2024-07-15 10:02:44.146866] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:30.800 2024/07/15 10:02:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:30.800 [2024-07-15 10:02:44.162057] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:30.800 [2024-07-15 10:02:44.162082] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:30.800 2024/07/15 10:02:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:30.800 [2024-07-15 10:02:44.177617] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:30.800 [2024-07-15 10:02:44.177648] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:30.800 2024/07/15 10:02:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 
no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:30.800 [2024-07-15 10:02:44.193582] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:30.800 [2024-07-15 10:02:44.193631] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:30.800 2024/07/15 10:02:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:30.800 [2024-07-15 10:02:44.208128] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:30.800 [2024-07-15 10:02:44.208168] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:30.800 2024/07/15 10:02:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:30.800 [2024-07-15 10:02:44.222367] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:30.800 [2024-07-15 10:02:44.222400] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:30.800 2024/07/15 10:02:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:30.800 [2024-07-15 10:02:44.236462] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:30.800 [2024-07-15 10:02:44.236493] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:30.800 2024/07/15 10:02:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:30.800 [2024-07-15 10:02:44.250760] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:30.800 [2024-07-15 10:02:44.250788] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:30.800 2024/07/15 10:02:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:30.800 [2024-07-15 10:02:44.264519] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:30.800 [2024-07-15 10:02:44.264547] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:30.800 2024/07/15 10:02:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:30.800 [2024-07-15 10:02:44.278833] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
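The repeated failures in this stretch of the log are all the same RPC being retried: each attempt calls nvmf_subsystem_add_ns for nqn.2016-06.io.spdk:cnode1 with bdev malloc0 and nsid 1 while NSID 1 is already allocated in the subsystem, so the target rejects every call with JSON-RPC error Code=-32602 (Invalid parameters). As a minimal sketch only (not part of this test run), a raw request with the same shape as the params echoed in the log could look like the following; the Unix-socket path is an assumption (SPDK's conventional default), not something taken from this log.

```python
#!/usr/bin/env python3
# Sketch: send one nvmf_subsystem_add_ns JSON-RPC request with the same shape
# as the params printed in the log above. Assumes an SPDK target is listening
# on /var/tmp/spdk.sock (an assumed default, not taken from this log).
import json
import socket

SOCK_PATH = "/var/tmp/spdk.sock"  # assumed default SPDK RPC socket path

request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "nvmf_subsystem_add_ns",
    "params": {
        "nqn": "nqn.2016-06.io.spdk:cnode1",
        "namespace": {
            "bdev_name": "malloc0",
            "nsid": 1,  # NSID already in use -> expect an error reply
            "no_auto_visible": False,
        },
    },
}

with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as sock:
    sock.connect(SOCK_PATH)
    sock.sendall(json.dumps(request).encode())
    # Simplistic single read; enough for the short reply expected here, which
    # should be a JSON-RPC error object along the lines of
    # {"error": {"code": -32602, "message": "Invalid parameters"}}.
    print(json.loads(sock.recv(65536).decode()))
```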
00:21:30.800 [2024-07-15 10:02:44.278862] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:30.800 2024/07/15 10:02:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:30.800 [2024-07-15 10:02:44.292836] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:30.800 [2024-07-15 10:02:44.292865] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:30.800 2024/07/15 10:02:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:30.800 [2024-07-15 10:02:44.307590] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:30.800 [2024-07-15 10:02:44.307620] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:30.800 2024/07/15 10:02:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:30.800 [2024-07-15 10:02:44.323518] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:30.800 [2024-07-15 10:02:44.323564] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:30.800 2024/07/15 10:02:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:30.800 [2024-07-15 10:02:44.338623] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:30.800 [2024-07-15 10:02:44.338682] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:30.800 2024/07/15 10:02:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:30.800 [2024-07-15 10:02:44.354328] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:30.800 [2024-07-15 10:02:44.354367] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:30.800 2024/07/15 10:02:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:30.800 [2024-07-15 10:02:44.368904] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:30.800 [2024-07-15 10:02:44.368938] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:30.800 2024/07/15 10:02:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:30.800 [2024-07-15 10:02:44.380127] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:30.800 [2024-07-15 10:02:44.380158] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:30.800 2024/07/15 10:02:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:31.059 [2024-07-15 10:02:44.395310] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:31.059 [2024-07-15 10:02:44.395343] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:31.059 2024/07/15 10:02:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:31.059 [2024-07-15 10:02:44.410175] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:31.059 [2024-07-15 10:02:44.410206] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:31.059 2024/07/15 10:02:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:31.059 [2024-07-15 10:02:44.426483] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:31.059 [2024-07-15 10:02:44.426516] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:31.059 2024/07/15 10:02:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:31.059 [2024-07-15 10:02:44.442830] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:31.059 [2024-07-15 10:02:44.442866] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:31.059 2024/07/15 10:02:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:31.059 [2024-07-15 10:02:44.454379] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:31.059 [2024-07-15 10:02:44.454414] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:31.059 2024/07/15 10:02:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:31.059 [2024-07-15 10:02:44.469813] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:31.059 [2024-07-15 10:02:44.469843] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:31.059 2024/07/15 10:02:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:31.059 [2024-07-15 10:02:44.484695] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:31.059 [2024-07-15 10:02:44.484726] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:31.059 2024/07/15 10:02:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:31.059 [2024-07-15 10:02:44.499511] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:31.059 [2024-07-15 10:02:44.499561] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:31.059 2024/07/15 10:02:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:31.059 [2024-07-15 10:02:44.514220] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:31.059 [2024-07-15 10:02:44.514248] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:31.059 2024/07/15 10:02:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:31.059 [2024-07-15 10:02:44.528594] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:31.059 [2024-07-15 10:02:44.528624] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:31.059 2024/07/15 10:02:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:31.059 [2024-07-15 10:02:44.539180] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:31.059 [2024-07-15 10:02:44.539209] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:31.059 2024/07/15 10:02:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:31.059 [2024-07-15 10:02:44.554927] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:31.059 [2024-07-15 10:02:44.554954] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:31.059 2024/07/15 10:02:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for 
nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:31.059 [2024-07-15 10:02:44.571158] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:31.059 [2024-07-15 10:02:44.571191] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:31.059 2024/07/15 10:02:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:31.059 [2024-07-15 10:02:44.586760] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:31.059 [2024-07-15 10:02:44.586790] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:31.059 2024/07/15 10:02:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:31.059 [2024-07-15 10:02:44.600889] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:31.059 [2024-07-15 10:02:44.600920] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:31.059 2024/07/15 10:02:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:31.059 [2024-07-15 10:02:44.614924] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:31.059 [2024-07-15 10:02:44.614951] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:31.059 2024/07/15 10:02:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:31.059 [2024-07-15 10:02:44.629453] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:31.060 [2024-07-15 10:02:44.629484] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:31.060 2024/07/15 10:02:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:31.319 [2024-07-15 10:02:44.643872] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:31.319 [2024-07-15 10:02:44.643907] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:31.319 2024/07/15 10:02:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:31.319 [2024-07-15 10:02:44.657644] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:31.319 [2024-07-15 10:02:44.657681] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:21:31.319 2024/07/15 10:02:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:31.319 [2024-07-15 10:02:44.672575] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:31.319 [2024-07-15 10:02:44.672607] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:31.319 2024/07/15 10:02:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:31.319 [2024-07-15 10:02:44.686919] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:31.319 [2024-07-15 10:02:44.686948] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:31.319 2024/07/15 10:02:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:31.319 [2024-07-15 10:02:44.700703] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:31.319 [2024-07-15 10:02:44.700734] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:31.319 2024/07/15 10:02:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:31.319 [2024-07-15 10:02:44.715390] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:31.319 [2024-07-15 10:02:44.715420] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:31.319 2024/07/15 10:02:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:31.319 [2024-07-15 10:02:44.729921] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:31.319 [2024-07-15 10:02:44.729951] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:31.319 2024/07/15 10:02:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:31.319 [2024-07-15 10:02:44.744302] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:31.319 [2024-07-15 10:02:44.744340] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:31.320 2024/07/15 10:02:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
parameters 00:21:31.320 [2024-07-15 10:02:44.759436] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:31.320 [2024-07-15 10:02:44.759466] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:31.320 2024/07/15 10:02:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:31.320 [2024-07-15 10:02:44.770933] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:31.320 [2024-07-15 10:02:44.770969] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:31.320 2024/07/15 10:02:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:31.320 [2024-07-15 10:02:44.785975] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:31.320 [2024-07-15 10:02:44.786008] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:31.320 2024/07/15 10:02:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:31.320 [2024-07-15 10:02:44.800932] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:31.320 [2024-07-15 10:02:44.800964] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:31.320 2024/07/15 10:02:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:31.320 [2024-07-15 10:02:44.816788] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:31.320 [2024-07-15 10:02:44.816820] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:31.320 2024/07/15 10:02:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:31.320 [2024-07-15 10:02:44.831262] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:31.320 [2024-07-15 10:02:44.831293] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:31.320 2024/07/15 10:02:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:31.320 [2024-07-15 10:02:44.842742] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:31.320 [2024-07-15 10:02:44.842769] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:31.320 2024/07/15 10:02:44 error on 
JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:31.320 [2024-07-15 10:02:44.857639] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:31.320 [2024-07-15 10:02:44.857687] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:31.320 2024/07/15 10:02:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:31.320 [2024-07-15 10:02:44.872219] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:31.320 [2024-07-15 10:02:44.872260] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:31.320 2024/07/15 10:02:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:31.320 [2024-07-15 10:02:44.886316] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:31.320 [2024-07-15 10:02:44.886346] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:31.320 2024/07/15 10:02:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:31.320 [2024-07-15 10:02:44.900289] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:31.320 [2024-07-15 10:02:44.900343] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:31.580 2024/07/15 10:02:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:31.580 [2024-07-15 10:02:44.916007] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:31.580 [2024-07-15 10:02:44.916040] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:31.580 2024/07/15 10:02:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:31.580 [2024-07-15 10:02:44.931291] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:31.580 [2024-07-15 10:02:44.931319] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:31.580 2024/07/15 10:02:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:31.580 [2024-07-15 10:02:44.945764] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:31.580 [2024-07-15 10:02:44.945791] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:31.580 2024/07/15 10:02:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:31.580 [2024-07-15 10:02:44.959407] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:31.580 [2024-07-15 10:02:44.959446] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:31.580 2024/07/15 10:02:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:31.580 [2024-07-15 10:02:44.974086] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:31.580 [2024-07-15 10:02:44.974115] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:31.580 2024/07/15 10:02:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:31.580 [2024-07-15 10:02:44.988815] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:31.580 [2024-07-15 10:02:44.988843] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:31.580 2024/07/15 10:02:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:31.580 [2024-07-15 10:02:45.003399] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:31.580 [2024-07-15 10:02:45.003427] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:31.580 2024/07/15 10:02:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:31.580 [2024-07-15 10:02:45.017849] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:31.580 [2024-07-15 10:02:45.017888] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:31.580 2024/07/15 10:02:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:31.580 [2024-07-15 10:02:45.031986] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:31.580 [2024-07-15 10:02:45.032014] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:31.580 2024/07/15 10:02:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:31.580 [2024-07-15 10:02:45.046166] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:31.580 [2024-07-15 10:02:45.046195] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:31.580 2024/07/15 10:02:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:31.580 [2024-07-15 10:02:45.060346] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:31.580 [2024-07-15 10:02:45.060377] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:31.580 2024/07/15 10:02:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:31.580 [2024-07-15 10:02:45.075262] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:31.580 [2024-07-15 10:02:45.075292] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:31.580 2024/07/15 10:02:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:31.580 [2024-07-15 10:02:45.089921] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:31.580 [2024-07-15 10:02:45.089948] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:31.580 2024/07/15 10:02:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:31.580 [2024-07-15 10:02:45.104280] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:31.580 [2024-07-15 10:02:45.104312] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:31.580 2024/07/15 10:02:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:31.580 [2024-07-15 10:02:45.118475] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:31.580 [2024-07-15 10:02:45.118507] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:31.580 2024/07/15 10:02:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:31.580 [2024-07-15 10:02:45.132430] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:21:31.580 [2024-07-15 10:02:45.132465] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:31.580 2024/07/15 10:02:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:31.580 [2024-07-15 10:02:45.147461] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:31.580 [2024-07-15 10:02:45.147493] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:31.580 2024/07/15 10:02:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:31.580 [2024-07-15 10:02:45.162823] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:31.580 [2024-07-15 10:02:45.162856] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:31.840 2024/07/15 10:02:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:31.840 [2024-07-15 10:02:45.179088] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:31.840 [2024-07-15 10:02:45.179119] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:31.840 2024/07/15 10:02:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:31.840 [2024-07-15 10:02:45.193555] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:31.840 [2024-07-15 10:02:45.193589] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:31.840 2024/07/15 10:02:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:31.840 [2024-07-15 10:02:45.205239] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:31.840 [2024-07-15 10:02:45.205273] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:31.840 2024/07/15 10:02:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:31.840 [2024-07-15 10:02:45.220044] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:31.840 [2024-07-15 10:02:45.220079] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:31.840 2024/07/15 10:02:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 
no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:31.840 [2024-07-15 10:02:45.231885] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:31.840 [2024-07-15 10:02:45.231913] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:31.840 2024/07/15 10:02:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:31.840 [2024-07-15 10:02:45.248434] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:31.840 [2024-07-15 10:02:45.248466] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:31.840 2024/07/15 10:02:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:31.840 [2024-07-15 10:02:45.264338] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:31.840 [2024-07-15 10:02:45.264396] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:31.840 2024/07/15 10:02:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:31.840 [2024-07-15 10:02:45.278562] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:31.840 [2024-07-15 10:02:45.278590] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:31.840 2024/07/15 10:02:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:31.840 [2024-07-15 10:02:45.292757] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:31.840 [2024-07-15 10:02:45.292786] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:31.840 2024/07/15 10:02:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:31.840 [2024-07-15 10:02:45.306834] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:31.840 [2024-07-15 10:02:45.306859] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:31.840 2024/07/15 10:02:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:31.840 [2024-07-15 10:02:45.321858] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:21:31.840 [2024-07-15 10:02:45.321886] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:31.840 2024/07/15 10:02:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:31.840 [2024-07-15 10:02:45.333273] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:31.840 [2024-07-15 10:02:45.333300] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:31.840 2024/07/15 10:02:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:31.840 [2024-07-15 10:02:45.347694] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:31.840 [2024-07-15 10:02:45.347718] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:31.840 2024/07/15 10:02:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:31.840 [2024-07-15 10:02:45.361081] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:31.840 [2024-07-15 10:02:45.361109] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:31.840 2024/07/15 10:02:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:31.840 [2024-07-15 10:02:45.374887] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:31.840 [2024-07-15 10:02:45.374913] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:31.840 2024/07/15 10:02:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:31.840 [2024-07-15 10:02:45.388736] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:31.840 [2024-07-15 10:02:45.388764] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:31.840 2024/07/15 10:02:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:31.840 [2024-07-15 10:02:45.403024] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:31.840 [2024-07-15 10:02:45.403051] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:31.840 2024/07/15 10:02:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:31.840 [2024-07-15 10:02:45.416973] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:31.840 [2024-07-15 10:02:45.417001] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:31.840 2024/07/15 10:02:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:32.100 [2024-07-15 10:02:45.430756] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:32.100 [2024-07-15 10:02:45.430783] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:32.100 2024/07/15 10:02:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:32.100 [2024-07-15 10:02:45.445325] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:32.100 [2024-07-15 10:02:45.445363] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:32.100 2024/07/15 10:02:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:32.100 [2024-07-15 10:02:45.460539] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:32.100 [2024-07-15 10:02:45.460570] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:32.100 2024/07/15 10:02:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:32.101 [2024-07-15 10:02:45.475736] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:32.101 [2024-07-15 10:02:45.475768] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:32.101 2024/07/15 10:02:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:32.101 [2024-07-15 10:02:45.489746] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:32.101 [2024-07-15 10:02:45.489774] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:32.101 2024/07/15 10:02:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:32.101 [2024-07-15 10:02:45.504873] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:32.101 [2024-07-15 10:02:45.504903] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:32.101 2024/07/15 10:02:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:32.101 [2024-07-15 10:02:45.519718] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:32.101 [2024-07-15 10:02:45.519747] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:32.101 2024/07/15 10:02:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:32.101 [2024-07-15 10:02:45.534190] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:32.101 [2024-07-15 10:02:45.534221] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:32.101 2024/07/15 10:02:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:32.101 [2024-07-15 10:02:45.548133] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:32.101 [2024-07-15 10:02:45.548163] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:32.101 2024/07/15 10:02:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:32.101 [2024-07-15 10:02:45.562020] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:32.101 [2024-07-15 10:02:45.562045] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:32.101 2024/07/15 10:02:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:32.101 [2024-07-15 10:02:45.575601] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:32.101 [2024-07-15 10:02:45.575630] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:32.101 2024/07/15 10:02:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:32.101 [2024-07-15 10:02:45.589591] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:32.101 [2024-07-15 10:02:45.589620] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:32.101 2024/07/15 10:02:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for 
nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:32.101 [2024-07-15 10:02:45.603946] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:32.101 [2024-07-15 10:02:45.603973] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:32.101 2024/07/15 10:02:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:32.101 [2024-07-15 10:02:45.618692] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:32.101 [2024-07-15 10:02:45.618720] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:32.101 2024/07/15 10:02:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:32.101 [2024-07-15 10:02:45.629300] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:32.101 [2024-07-15 10:02:45.629331] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:32.101 2024/07/15 10:02:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:32.101 [2024-07-15 10:02:45.644131] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:32.101 [2024-07-15 10:02:45.644164] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:32.101 2024/07/15 10:02:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:32.101 [2024-07-15 10:02:45.659051] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:32.101 [2024-07-15 10:02:45.659086] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:32.101 2024/07/15 10:02:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:32.101 [2024-07-15 10:02:45.670532] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:32.101 [2024-07-15 10:02:45.670563] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:32.101 2024/07/15 10:02:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:32.361 [2024-07-15 10:02:45.685943] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:32.361 [2024-07-15 10:02:45.685972] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:21:32.361 2024/07/15 10:02:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:32.361 [2024-07-15 10:02:45.701746] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:32.361 [2024-07-15 10:02:45.701775] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:32.361 2024/07/15 10:02:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:32.361 [2024-07-15 10:02:45.716534] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:32.361 [2024-07-15 10:02:45.716564] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:32.361 2024/07/15 10:02:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:32.361 [2024-07-15 10:02:45.727534] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:32.361 [2024-07-15 10:02:45.727564] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:32.361 2024/07/15 10:02:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:32.361 [2024-07-15 10:02:45.743149] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:32.361 [2024-07-15 10:02:45.743181] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:32.361 2024/07/15 10:02:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:32.361 [2024-07-15 10:02:45.759057] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:32.361 [2024-07-15 10:02:45.759099] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:32.361 2024/07/15 10:02:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:32.361 [2024-07-15 10:02:45.770570] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:32.361 [2024-07-15 10:02:45.770600] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:32.361 2024/07/15 10:02:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
parameters 00:21:32.361 [2024-07-15 10:02:45.786264] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:32.361 [2024-07-15 10:02:45.786294] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:32.361 2024/07/15 10:02:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:32.361 [2024-07-15 10:02:45.800979] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:32.361 [2024-07-15 10:02:45.801010] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:32.361 2024/07/15 10:02:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:32.361 [2024-07-15 10:02:45.812478] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:32.361 [2024-07-15 10:02:45.812506] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:32.361 2024/07/15 10:02:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:32.361 [2024-07-15 10:02:45.827552] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:32.361 [2024-07-15 10:02:45.827582] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:32.361 2024/07/15 10:02:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:32.361 [2024-07-15 10:02:45.838624] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:32.361 [2024-07-15 10:02:45.838654] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:32.361 2024/07/15 10:02:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:32.361 [2024-07-15 10:02:45.854254] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:32.361 [2024-07-15 10:02:45.854285] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:32.361 2024/07/15 10:02:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:32.361 [2024-07-15 10:02:45.871384] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:32.361 [2024-07-15 10:02:45.871420] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:32.361 2024/07/15 10:02:45 error on 
JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:32.361 [2024-07-15 10:02:45.887776] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:32.361 [2024-07-15 10:02:45.887804] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:32.361 2024/07/15 10:02:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:32.361 [2024-07-15 10:02:45.904074] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:32.361 [2024-07-15 10:02:45.904103] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:32.361 2024/07/15 10:02:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:32.361 [2024-07-15 10:02:45.919999] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:32.361 [2024-07-15 10:02:45.920033] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:32.362 2024/07/15 10:02:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:32.362 [2024-07-15 10:02:45.931601] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:32.362 [2024-07-15 10:02:45.931631] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:32.362 2024/07/15 10:02:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:32.362 [2024-07-15 10:02:45.942048] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:32.362 [2024-07-15 10:02:45.942082] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:32.622 2024/07/15 10:02:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:32.622 [2024-07-15 10:02:45.950413] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:32.622 [2024-07-15 10:02:45.950442] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:32.622 2024/07/15 10:02:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:32.622 [2024-07-15 10:02:45.965366] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:32.622 [2024-07-15 10:02:45.965396] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:32.622 2024/07/15 10:02:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:32.622 [2024-07-15 10:02:45.976534] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:32.622 [2024-07-15 10:02:45.976566] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:32.622 2024/07/15 10:02:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:32.622 [2024-07-15 10:02:45.991512] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:32.622 [2024-07-15 10:02:45.991539] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:32.622 2024/07/15 10:02:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:32.622 [2024-07-15 10:02:46.002453] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:32.622 [2024-07-15 10:02:46.002480] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:32.622 2024/07/15 10:02:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:32.622 [2024-07-15 10:02:46.016791] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:32.622 [2024-07-15 10:02:46.016819] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:32.622 2024/07/15 10:02:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:32.622 [2024-07-15 10:02:46.030541] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:32.622 [2024-07-15 10:02:46.030573] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:32.622 2024/07/15 10:02:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:32.622 [2024-07-15 10:02:46.044534] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:32.622 [2024-07-15 10:02:46.044567] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:32.622 2024/07/15 10:02:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:32.622 [2024-07-15 10:02:46.058946] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:32.622 [2024-07-15 10:02:46.058974] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:32.622 2024/07/15 10:02:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:32.622 [2024-07-15 10:02:46.074214] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:32.622 [2024-07-15 10:02:46.074243] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:32.622 2024/07/15 10:02:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:32.622 [2024-07-15 10:02:46.089488] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:32.622 [2024-07-15 10:02:46.089529] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:32.622 2024/07/15 10:02:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:32.622 [2024-07-15 10:02:46.103900] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:32.622 [2024-07-15 10:02:46.103929] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:32.622 2024/07/15 10:02:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:32.622 [2024-07-15 10:02:46.117668] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:32.622 [2024-07-15 10:02:46.117703] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:32.622 2024/07/15 10:02:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:32.622 [2024-07-15 10:02:46.132505] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:32.622 [2024-07-15 10:02:46.132534] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:32.622 2024/07/15 10:02:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:32.622 [2024-07-15 10:02:46.143700] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:21:32.622 [2024-07-15 10:02:46.143726] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:32.622 2024/07/15 10:02:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:32.622 [2024-07-15 10:02:46.158739] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:32.622 [2024-07-15 10:02:46.158764] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:32.622 2024/07/15 10:02:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:32.622 [2024-07-15 10:02:46.169420] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:32.622 [2024-07-15 10:02:46.169449] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:32.622 2024/07/15 10:02:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:32.622 [2024-07-15 10:02:46.184286] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:32.622 [2024-07-15 10:02:46.184313] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:32.622 2024/07/15 10:02:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:32.622 [2024-07-15 10:02:46.198473] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:32.622 [2024-07-15 10:02:46.198501] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:32.622 2024/07/15 10:02:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:32.883 [2024-07-15 10:02:46.212671] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:32.883 [2024-07-15 10:02:46.212707] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:32.883 2024/07/15 10:02:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:32.883 [2024-07-15 10:02:46.227429] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:32.883 [2024-07-15 10:02:46.227458] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:32.883 2024/07/15 10:02:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 
no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:32.883 [2024-07-15 10:02:46.238452] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:32.883 [2024-07-15 10:02:46.238480] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:32.883 2024/07/15 10:02:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:32.883 [2024-07-15 10:02:46.253596] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:32.883 [2024-07-15 10:02:46.253628] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:32.883 2024/07/15 10:02:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:32.883 [2024-07-15 10:02:46.268164] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:32.883 [2024-07-15 10:02:46.268195] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:32.883 2024/07/15 10:02:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:32.883 [2024-07-15 10:02:46.279926] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:32.883 [2024-07-15 10:02:46.279957] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:32.883 2024/07/15 10:02:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:32.883 [2024-07-15 10:02:46.295500] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:32.884 [2024-07-15 10:02:46.295534] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:32.884 2024/07/15 10:02:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:32.884 [2024-07-15 10:02:46.312116] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:32.884 [2024-07-15 10:02:46.312147] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:32.884 2024/07/15 10:02:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:32.884 [2024-07-15 10:02:46.328056] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:21:32.884 [2024-07-15 10:02:46.328097] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:32.884 2024/07/15 10:02:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:32.884 [2024-07-15 10:02:46.343931] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:32.884 [2024-07-15 10:02:46.343981] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:32.884 2024/07/15 10:02:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:32.884 [2024-07-15 10:02:46.359237] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:32.884 [2024-07-15 10:02:46.359269] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:32.884 2024/07/15 10:02:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:32.884 [2024-07-15 10:02:46.373890] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:32.884 [2024-07-15 10:02:46.373919] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:32.884 2024/07/15 10:02:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:32.884 [2024-07-15 10:02:46.388344] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:32.884 [2024-07-15 10:02:46.388375] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:32.884 2024/07/15 10:02:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:32.884 [2024-07-15 10:02:46.399087] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:32.884 [2024-07-15 10:02:46.399115] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:32.884 2024/07/15 10:02:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:32.884 [2024-07-15 10:02:46.414094] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:32.884 [2024-07-15 10:02:46.414123] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:32.884 2024/07/15 10:02:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:32.884 [2024-07-15 10:02:46.427929] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:32.884 [2024-07-15 10:02:46.427956] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:32.884 2024/07/15 10:02:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:32.884 [2024-07-15 10:02:46.442816] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:32.884 [2024-07-15 10:02:46.442844] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:32.884 2024/07/15 10:02:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:32.884 [2024-07-15 10:02:46.458745] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:32.884 [2024-07-15 10:02:46.458777] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:32.884 2024/07/15 10:02:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:33.145 [2024-07-15 10:02:46.473919] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:33.145 [2024-07-15 10:02:46.473954] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:33.145 2024/07/15 10:02:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:33.145 [2024-07-15 10:02:46.489948] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:33.145 [2024-07-15 10:02:46.489982] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:33.145 2024/07/15 10:02:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:33.145 [2024-07-15 10:02:46.503531] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:33.145 [2024-07-15 10:02:46.503563] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:33.145 2024/07/15 10:02:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:33.145 [2024-07-15 10:02:46.518873] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:33.145 [2024-07-15 10:02:46.518908] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:33.145 2024/07/15 10:02:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:33.145 [2024-07-15 10:02:46.533856] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:33.145 [2024-07-15 10:02:46.533887] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:33.145 2024/07/15 10:02:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:33.145 [2024-07-15 10:02:46.545203] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:33.145 [2024-07-15 10:02:46.545237] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:33.145 2024/07/15 10:02:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:33.145 [2024-07-15 10:02:46.560556] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:33.145 [2024-07-15 10:02:46.560592] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:33.145 2024/07/15 10:02:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:33.145 [2024-07-15 10:02:46.576426] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:33.145 [2024-07-15 10:02:46.576472] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:33.145 2024/07/15 10:02:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:33.145 [2024-07-15 10:02:46.591442] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:33.145 [2024-07-15 10:02:46.591476] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:33.145 2024/07/15 10:02:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:33.145 [2024-07-15 10:02:46.602548] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:33.145 [2024-07-15 10:02:46.602579] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:33.145 2024/07/15 10:02:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for 
nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:33.145 [2024-07-15 10:02:46.618417] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:33.145 [2024-07-15 10:02:46.618452] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:33.145 2024/07/15 10:02:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:33.145 [2024-07-15 10:02:46.634437] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:33.145 [2024-07-15 10:02:46.634467] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:33.145 2024/07/15 10:02:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:33.145 [2024-07-15 10:02:46.649315] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:33.145 [2024-07-15 10:02:46.649345] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:33.145 2024/07/15 10:02:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:33.145 [2024-07-15 10:02:46.663797] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:33.145 [2024-07-15 10:02:46.663826] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:33.145 2024/07/15 10:02:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:33.145 [2024-07-15 10:02:46.675123] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:33.145 [2024-07-15 10:02:46.675158] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:33.145 2024/07/15 10:02:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:33.145 [2024-07-15 10:02:46.691237] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:33.145 [2024-07-15 10:02:46.691285] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:33.145 2024/07/15 10:02:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:33.145 [2024-07-15 10:02:46.706798] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:33.145 [2024-07-15 10:02:46.706831] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:21:33.145 2024/07/15 10:02:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:33.145 [2024-07-15 10:02:46.721212] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:33.145 [2024-07-15 10:02:46.721247] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:33.145 2024/07/15 10:02:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:33.405 [2024-07-15 10:02:46.736062] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:33.405 [2024-07-15 10:02:46.736095] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:33.405 2024/07/15 10:02:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:33.405 [2024-07-15 10:02:46.747471] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:33.405 [2024-07-15 10:02:46.747505] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:33.405 2024/07/15 10:02:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:33.405 [2024-07-15 10:02:46.762849] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:33.405 [2024-07-15 10:02:46.762896] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:33.405 2024/07/15 10:02:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:33.405 [2024-07-15 10:02:46.778769] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:33.405 [2024-07-15 10:02:46.778815] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:33.405 2024/07/15 10:02:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:33.405 [2024-07-15 10:02:46.793539] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:33.405 [2024-07-15 10:02:46.793578] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:33.405 2024/07/15 10:02:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
parameters 00:21:33.405 [2024-07-15 10:02:46.809445] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:33.405 [2024-07-15 10:02:46.809485] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:33.405 2024/07/15 10:02:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:33.405 [2024-07-15 10:02:46.824964] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:33.405 [2024-07-15 10:02:46.825001] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:33.406 2024/07/15 10:02:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:33.406 [2024-07-15 10:02:46.840456] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:33.406 [2024-07-15 10:02:46.840491] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:33.406 2024/07/15 10:02:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:33.406 [2024-07-15 10:02:46.856272] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:33.406 [2024-07-15 10:02:46.856305] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:33.406 2024/07/15 10:02:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:33.406 [2024-07-15 10:02:46.871078] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:33.406 [2024-07-15 10:02:46.871106] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:33.406 2024/07/15 10:02:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:33.406 [2024-07-15 10:02:46.882352] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:33.406 [2024-07-15 10:02:46.882383] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:33.406 2024/07/15 10:02:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:33.406 [2024-07-15 10:02:46.898132] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:33.406 [2024-07-15 10:02:46.898164] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:33.406 2024/07/15 10:02:46 error on 
JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:33.406 [2024-07-15 10:02:46.914424] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:33.406 [2024-07-15 10:02:46.914457] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:33.406 2024/07/15 10:02:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:33.406 [2024-07-15 10:02:46.926534] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:33.406 [2024-07-15 10:02:46.926582] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:33.406 2024/07/15 10:02:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:33.406 [2024-07-15 10:02:46.942189] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:33.406 [2024-07-15 10:02:46.942232] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:33.406 2024/07/15 10:02:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:33.406 [2024-07-15 10:02:46.958601] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:33.406 [2024-07-15 10:02:46.958655] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:33.406 2024/07/15 10:02:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:33.406 [2024-07-15 10:02:46.975871] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:33.406 [2024-07-15 10:02:46.975915] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:33.406 2024/07/15 10:02:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:33.665 [2024-07-15 10:02:46.992235] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:33.665 [2024-07-15 10:02:46.992273] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:33.665 2024/07/15 10:02:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:33.665 [2024-07-15 10:02:47.008633] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:33.665 [2024-07-15 10:02:47.008682] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:33.665 2024/07/15 10:02:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:33.665 [2024-07-15 10:02:47.020159] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:33.665 [2024-07-15 10:02:47.020195] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:33.665 2024/07/15 10:02:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:33.665 [2024-07-15 10:02:47.035326] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:33.665 [2024-07-15 10:02:47.035363] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:33.665 2024/07/15 10:02:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:33.665 [2024-07-15 10:02:47.051540] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:33.665 [2024-07-15 10:02:47.051579] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:33.665 2024/07/15 10:02:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:33.665 [2024-07-15 10:02:47.067923] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:33.665 [2024-07-15 10:02:47.067972] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:33.665 2024/07/15 10:02:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:33.665 [2024-07-15 10:02:47.084396] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:33.665 [2024-07-15 10:02:47.084439] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:33.665 2024/07/15 10:02:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:33.666 [2024-07-15 10:02:47.095542] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:33.666 [2024-07-15 10:02:47.095579] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:33.666 2024/07/15 10:02:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:33.666 [2024-07-15 10:02:47.110974] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:33.666 [2024-07-15 10:02:47.111008] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:33.666 2024/07/15 10:02:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:33.666 [2024-07-15 10:02:47.122115] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:33.666 [2024-07-15 10:02:47.122147] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:33.666 2024/07/15 10:02:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:33.666 [2024-07-15 10:02:47.136845] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:33.666 [2024-07-15 10:02:47.136880] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:33.666 2024/07/15 10:02:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:33.666 [2024-07-15 10:02:47.150092] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:33.666 [2024-07-15 10:02:47.150125] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:33.666 2024/07/15 10:02:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:33.666 [2024-07-15 10:02:47.165391] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:33.666 [2024-07-15 10:02:47.165443] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:33.666 2024/07/15 10:02:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:33.666 [2024-07-15 10:02:47.181227] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:33.666 [2024-07-15 10:02:47.181273] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:33.666 2024/07/15 10:02:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:33.666 [2024-07-15 10:02:47.195315] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:21:33.666 [2024-07-15 10:02:47.195349] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:33.666 2024/07/15 10:02:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:33.666 [2024-07-15 10:02:47.210125] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:33.666 [2024-07-15 10:02:47.210158] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:33.666 2024/07/15 10:02:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:33.666 [2024-07-15 10:02:47.225147] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:33.666 [2024-07-15 10:02:47.225184] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:33.666 2024/07/15 10:02:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:33.666 [2024-07-15 10:02:47.236871] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:33.666 [2024-07-15 10:02:47.236904] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:33.666 2024/07/15 10:02:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:33.926 [2024-07-15 10:02:47.252821] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:33.926 [2024-07-15 10:02:47.252858] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:33.926 2024/07/15 10:02:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:33.926 [2024-07-15 10:02:47.268387] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:33.926 [2024-07-15 10:02:47.268423] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:33.926 2024/07/15 10:02:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:33.926 [2024-07-15 10:02:47.282188] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:33.926 [2024-07-15 10:02:47.282218] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:33.926 2024/07/15 10:02:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 
no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:33.926 [2024-07-15 10:02:47.297487] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:33.926 [2024-07-15 10:02:47.297519] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:33.926 2024/07/15 10:02:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:33.926 [2024-07-15 10:02:47.312970] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:33.926 [2024-07-15 10:02:47.313000] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:33.926 2024/07/15 10:02:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:33.926 [2024-07-15 10:02:47.327476] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:33.926 [2024-07-15 10:02:47.327512] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:33.926 2024/07/15 10:02:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:33.926 [2024-07-15 10:02:47.342778] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:33.926 [2024-07-15 10:02:47.342822] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:33.926 2024/07/15 10:02:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:33.926 [2024-07-15 10:02:47.358325] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:33.926 [2024-07-15 10:02:47.358360] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:33.926 2024/07/15 10:02:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:33.926 [2024-07-15 10:02:47.372851] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:33.926 [2024-07-15 10:02:47.372885] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:33.926 2024/07/15 10:02:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:33.926 [2024-07-15 10:02:47.387017] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:21:33.926 [2024-07-15 10:02:47.387050] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:33.926 2024/07/15 10:02:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:33.926 [2024-07-15 10:02:47.398409] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:33.926 [2024-07-15 10:02:47.398448] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:33.926 2024/07/15 10:02:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:33.926 [2024-07-15 10:02:47.414001] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:33.926 [2024-07-15 10:02:47.414050] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:33.926 2024/07/15 10:02:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:33.926 [2024-07-15 10:02:47.430118] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:33.926 [2024-07-15 10:02:47.430151] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:33.926 2024/07/15 10:02:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:33.926 [2024-07-15 10:02:47.441086] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:33.926 [2024-07-15 10:02:47.441116] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:33.926 2024/07/15 10:02:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:33.926 [2024-07-15 10:02:47.456067] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:33.926 [2024-07-15 10:02:47.456097] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:33.926 2024/07/15 10:02:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:33.926 [2024-07-15 10:02:47.470518] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:33.926 [2024-07-15 10:02:47.470547] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:33.926 2024/07/15 10:02:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:33.926 [2024-07-15 10:02:47.481467] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:33.926 [2024-07-15 10:02:47.481495] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:33.926 2024/07/15 10:02:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:33.926 [2024-07-15 10:02:47.497406] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:33.926 [2024-07-15 10:02:47.497441] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:33.926 2024/07/15 10:02:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:34.186 [2024-07-15 10:02:47.513904] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:34.186 [2024-07-15 10:02:47.513948] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:34.186 2024/07/15 10:02:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:34.186 [2024-07-15 10:02:47.529266] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:34.186 [2024-07-15 10:02:47.529302] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:34.187 2024/07/15 10:02:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:34.187 [2024-07-15 10:02:47.543633] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:34.187 [2024-07-15 10:02:47.543706] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:34.187 2024/07/15 10:02:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:34.187 [2024-07-15 10:02:47.555069] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:34.187 [2024-07-15 10:02:47.555111] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:34.187 2024/07/15 10:02:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:34.187 [2024-07-15 10:02:47.570606] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:34.187 [2024-07-15 10:02:47.570643] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:34.187 2024/07/15 10:02:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:34.187 [2024-07-15 10:02:47.586091] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:34.187 [2024-07-15 10:02:47.586125] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:34.187 2024/07/15 10:02:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:34.187 [2024-07-15 10:02:47.600246] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:34.187 [2024-07-15 10:02:47.600278] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:34.187 2024/07/15 10:02:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:34.187 [2024-07-15 10:02:47.615458] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:34.187 [2024-07-15 10:02:47.615494] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:34.187 2024/07/15 10:02:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:34.187 [2024-07-15 10:02:47.626420] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:34.187 [2024-07-15 10:02:47.626451] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:34.187 2024/07/15 10:02:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:34.187 [2024-07-15 10:02:47.641122] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:34.187 [2024-07-15 10:02:47.641157] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:34.187 2024/07/15 10:02:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:34.187 [2024-07-15 10:02:47.655306] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:34.187 [2024-07-15 10:02:47.655338] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:34.187 2024/07/15 10:02:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for 
nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:34.187 [2024-07-15 10:02:47.670184] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:34.187 [2024-07-15 10:02:47.670215] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:34.187 2024/07/15 10:02:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:34.187 [2024-07-15 10:02:47.685516] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:34.187 [2024-07-15 10:02:47.685550] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:34.187 2024/07/15 10:02:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:34.187 [2024-07-15 10:02:47.700364] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:34.187 [2024-07-15 10:02:47.700398] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:34.187 2024/07/15 10:02:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:34.187 [2024-07-15 10:02:47.714257] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:34.187 [2024-07-15 10:02:47.714288] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:34.187 2024/07/15 10:02:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:34.187 [2024-07-15 10:02:47.729906] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:34.187 [2024-07-15 10:02:47.729939] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:34.187 2024/07/15 10:02:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:34.187 [2024-07-15 10:02:47.744529] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:34.187 [2024-07-15 10:02:47.744563] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:34.187 2024/07/15 10:02:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:34.187 [2024-07-15 10:02:47.759247] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:34.187 [2024-07-15 10:02:47.759281] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:21:34.187 2024/07/15 10:02:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:34.447 [2024-07-15 10:02:47.770569] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:34.447 [2024-07-15 10:02:47.770606] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:34.447 2024/07/15 10:02:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:34.447 [2024-07-15 10:02:47.786740] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:34.447 [2024-07-15 10:02:47.786774] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:34.447 2024/07/15 10:02:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:34.447 [2024-07-15 10:02:47.802472] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:34.447 [2024-07-15 10:02:47.802507] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:34.447 2024/07/15 10:02:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:34.447 [2024-07-15 10:02:47.817970] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:34.447 [2024-07-15 10:02:47.818004] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:34.447 2024/07/15 10:02:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:34.447 [2024-07-15 10:02:47.835036] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:34.447 [2024-07-15 10:02:47.835070] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:34.447 2024/07/15 10:02:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:34.447 [2024-07-15 10:02:47.850686] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:34.447 [2024-07-15 10:02:47.850740] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:34.447 2024/07/15 10:02:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
parameters 00:21:34.447 [2024-07-15 10:02:47.864948] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:34.447 [2024-07-15 10:02:47.864992] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:34.447 2024/07/15 10:02:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:34.447 [2024-07-15 10:02:47.879692] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:34.447 [2024-07-15 10:02:47.879726] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:34.447 2024/07/15 10:02:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:34.448 [2024-07-15 10:02:47.894232] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:34.448 [2024-07-15 10:02:47.894267] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:34.448 2024/07/15 10:02:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:34.448 [2024-07-15 10:02:47.909521] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:34.448 [2024-07-15 10:02:47.909555] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:34.448 2024/07/15 10:02:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:34.448 00:21:34.448 Latency(us) 00:21:34.448 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:34.448 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:21:34.448 Nvme1n1 : 5.01 15536.67 121.38 0.00 0.00 8230.50 3749.00 17743.37 00:21:34.448 =================================================================================================================== 00:21:34.448 Total : 15536.67 121.38 0.00 0.00 8230.50 3749.00 17743.37 00:21:34.448 [2024-07-15 10:02:47.921429] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:34.448 [2024-07-15 10:02:47.921455] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:34.448 2024/07/15 10:02:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:34.448 [2024-07-15 10:02:47.933412] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:34.448 [2024-07-15 10:02:47.933439] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:34.448 2024/07/15 10:02:47 error on JSON-RPC call, method: 
nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:34.448 [2024-07-15 10:02:47.945385] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:34.448 [2024-07-15 10:02:47.945409] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:34.448 2024/07/15 10:02:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:34.448 [2024-07-15 10:02:47.957364] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:34.448 [2024-07-15 10:02:47.957388] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:34.448 2024/07/15 10:02:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:34.448 [2024-07-15 10:02:47.969337] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:34.448 [2024-07-15 10:02:47.969361] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:34.448 2024/07/15 10:02:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:34.448 [2024-07-15 10:02:47.981327] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:34.448 [2024-07-15 10:02:47.981355] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:34.448 2024/07/15 10:02:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:34.448 [2024-07-15 10:02:47.993327] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:34.448 [2024-07-15 10:02:47.993365] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:34.448 2024/07/15 10:02:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:34.448 [2024-07-15 10:02:48.005284] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:34.448 [2024-07-15 10:02:48.005311] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:34.448 2024/07/15 10:02:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:34.448 [2024-07-15 10:02:48.017253] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:34.448 [2024-07-15 10:02:48.017276] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:34.448 2024/07/15 10:02:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:34.448 [2024-07-15 10:02:48.029234] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:34.448 [2024-07-15 10:02:48.029254] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:34.708 2024/07/15 10:02:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:34.708 [2024-07-15 10:02:48.041221] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:34.708 [2024-07-15 10:02:48.041246] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:34.708 2024/07/15 10:02:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:34.708 [2024-07-15 10:02:48.053228] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:34.708 [2024-07-15 10:02:48.053263] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:34.708 2024/07/15 10:02:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:34.708 [2024-07-15 10:02:48.065225] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:34.708 [2024-07-15 10:02:48.065256] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:34.708 2024/07/15 10:02:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:34.708 [2024-07-15 10:02:48.077185] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:34.708 [2024-07-15 10:02:48.077216] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:34.708 2024/07/15 10:02:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:34.708 [2024-07-15 10:02:48.089142] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:34.708 [2024-07-15 10:02:48.089166] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:34.708 2024/07/15 10:02:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:34.708 [2024-07-15 10:02:48.101114] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:34.708 [2024-07-15 10:02:48.101136] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:34.708 2024/07/15 10:02:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:34.708 [2024-07-15 10:02:48.113117] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:34.708 [2024-07-15 10:02:48.113139] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:34.708 2024/07/15 10:02:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:34.708 /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (76399) - No such process 00:21:34.708 10:02:48 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 76399 00:21:34.708 10:02:48 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:21:34.708 10:02:48 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:34.708 10:02:48 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:21:34.708 10:02:48 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:34.708 10:02:48 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:21:34.708 10:02:48 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:34.708 10:02:48 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:21:34.708 delay0 00:21:34.708 10:02:48 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:34.708 10:02:48 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:21:34.708 10:02:48 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:34.708 10:02:48 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:21:34.708 10:02:48 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:34.708 10:02:48 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:21:34.967 [2024-07-15 10:02:48.329731] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:21:41.538 Initializing NVMe Controllers 00:21:41.538 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:41.538 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:21:41.538 Initialization complete. Launching workers. 
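The long run of near-identical errors above is the zcopy test repeatedly re-issuing nvmf_subsystem_add_ns against nqn.2016-06.io.spdk:cnode1 while NSID 1 is already attached, so the target rejects every call with JSON-RPC error -32602 (Invalid parameters); only the timestamps differ between records. The Go-style %!s(bool=false) placeholder in the printed params suggests the requests come from a Go JSON-RPC client rather than rpc.py, and the I/O latency summary and abort run interleaved with the errors come from the workload that keeps running against the same subsystem in the meantime. As a rough sketch only (the rpc.py flags below are assumed, not taken from this log), a single failing call corresponds to:

  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  # Request on the wire, as reflected in the records above:
  #   {"method": "nvmf_subsystem_add_ns",
  #    "params": {"nqn": "nqn.2016-06.io.spdk:cnode1",
  #               "namespace": {"bdev_name": "malloc0", "nsid": 1}}}
  # Reply while NSID 1 is still in use:
  #   {"error": {"code": -32602, "message": "Invalid parameters"}}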
00:21:41.538 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 107 00:21:41.538 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 394, failed to submit 33 00:21:41.538 success 227, unsuccess 167, failed 0 00:21:41.538 10:02:54 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:21:41.538 10:02:54 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:21:41.538 10:02:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:41.538 10:02:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@117 -- # sync 00:21:41.538 10:02:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:41.538 10:02:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e 00:21:41.538 10:02:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:41.538 10:02:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:41.538 rmmod nvme_tcp 00:21:41.538 rmmod nvme_fabrics 00:21:41.538 rmmod nvme_keyring 00:21:41.538 10:02:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:41.538 10:02:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e 00:21:41.538 10:02:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0 00:21:41.538 10:02:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 76231 ']' 00:21:41.538 10:02:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 76231 00:21:41.538 10:02:54 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@948 -- # '[' -z 76231 ']' 00:21:41.538 10:02:54 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@952 -- # kill -0 76231 00:21:41.538 10:02:54 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@953 -- # uname 00:21:41.538 10:02:54 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:41.538 10:02:54 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 76231 00:21:41.538 10:02:54 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:21:41.538 10:02:54 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:21:41.538 killing process with pid 76231 00:21:41.538 10:02:54 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@966 -- # echo 'killing process with pid 76231' 00:21:41.538 10:02:54 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@967 -- # kill 76231 00:21:41.538 10:02:54 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@972 -- # wait 76231 00:21:41.538 10:02:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:41.538 10:02:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:41.538 10:02:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:41.538 10:02:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:41.538 10:02:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:41.538 10:02:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:41.538 10:02:54 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:41.538 10:02:54 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:41.538 10:02:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:21:41.538 00:21:41.538 real 0m24.548s 00:21:41.538 user 0m41.099s 00:21:41.538 sys 0m5.556s 00:21:41.538 10:02:54 nvmf_tcp.nvmf_zcopy -- 
common/autotest_common.sh@1124 -- # xtrace_disable 00:21:41.538 10:02:54 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:21:41.538 ************************************ 00:21:41.538 END TEST nvmf_zcopy 00:21:41.538 ************************************ 00:21:41.538 10:02:54 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:21:41.538 10:02:54 nvmf_tcp -- nvmf/nvmf.sh@54 -- # run_test nvmf_nmic /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:21:41.538 10:02:54 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:21:41.538 10:02:54 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:41.538 10:02:54 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:41.538 ************************************ 00:21:41.538 START TEST nvmf_nmic 00:21:41.538 ************************************ 00:21:41.538 10:02:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:21:41.538 * Looking for test storage... 00:21:41.538 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:21:41.538 10:02:54 nvmf_tcp.nvmf_nmic -- target/nmic.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:41.538 10:02:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:21:41.538 10:02:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:41.538 10:02:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:41.538 10:02:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:41.538 10:02:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:41.538 10:02:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:41.538 10:02:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:41.538 10:02:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:41.538 10:02:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:41.538 10:02:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:41.538 10:02:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:41.538 10:02:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec 00:21:41.538 10:02:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=a2b6b25a-cc90-4aea-9f09-c06f8a634aec 00:21:41.538 10:02:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:41.538 10:02:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:41.538 10:02:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:41.538 10:02:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:41.538 10:02:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:41.538 10:02:55 nvmf_tcp.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:41.538 10:02:55 nvmf_tcp.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:41.538 10:02:55 nvmf_tcp.nvmf_nmic -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:41.538 10:02:55 nvmf_tcp.nvmf_nmic -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:41.538 10:02:55 nvmf_tcp.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:41.538 10:02:55 nvmf_tcp.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:41.538 10:02:55 nvmf_tcp.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:21:41.539 10:02:55 nvmf_tcp.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:41.539 10:02:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@47 -- # : 0 00:21:41.539 10:02:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:41.539 10:02:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:41.539 10:02:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:41.539 10:02:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:41.539 10:02:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:41.539 10:02:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:41.539 10:02:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:41.539 10:02:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:41.539 10:02:55 nvmf_tcp.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:41.539 10:02:55 nvmf_tcp.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:41.539 10:02:55 nvmf_tcp.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:21:41.539 10:02:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:41.539 10:02:55 nvmf_tcp.nvmf_nmic -- 
nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:41.539 10:02:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:41.539 10:02:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:41.539 10:02:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:41.539 10:02:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:41.539 10:02:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:41.539 10:02:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:41.539 10:02:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:21:41.539 10:02:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:21:41.539 10:02:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:21:41.539 10:02:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:21:41.539 10:02:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:21:41.539 10:02:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@432 -- # nvmf_veth_init 00:21:41.539 10:02:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:41.539 10:02:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:41.539 10:02:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:21:41.539 10:02:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:21:41.539 10:02:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:41.539 10:02:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:41.539 10:02:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:41.539 10:02:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:41.539 10:02:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:41.539 10:02:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:41.539 10:02:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:41.539 10:02:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:41.539 10:02:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:21:41.539 10:02:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:21:41.539 Cannot find device "nvmf_tgt_br" 00:21:41.539 10:02:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@155 -- # true 00:21:41.539 10:02:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:21:41.539 Cannot find device "nvmf_tgt_br2" 00:21:41.539 10:02:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@156 -- # true 00:21:41.539 10:02:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:21:41.539 10:02:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:21:41.539 Cannot find device "nvmf_tgt_br" 00:21:41.539 10:02:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@158 -- # true 00:21:41.539 10:02:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:21:41.539 Cannot find device "nvmf_tgt_br2" 00:21:41.539 10:02:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@159 -- # true 00:21:41.539 10:02:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@160 -- # ip link delete nvmf_br type 
bridge 00:21:41.799 10:02:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:21:41.799 10:02:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:41.799 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:41.799 10:02:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@162 -- # true 00:21:41.799 10:02:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:41.799 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:41.799 10:02:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@163 -- # true 00:21:41.799 10:02:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:21:41.799 10:02:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:41.799 10:02:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:41.799 10:02:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:41.799 10:02:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:41.799 10:02:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:41.799 10:02:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:41.799 10:02:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:21:41.799 10:02:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:21:41.799 10:02:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:21:41.799 10:02:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:21:41.799 10:02:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:21:41.799 10:02:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:21:41.799 10:02:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:41.799 10:02:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:41.799 10:02:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:41.799 10:02:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:21:41.799 10:02:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:21:41.799 10:02:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:21:41.799 10:02:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:41.799 10:02:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:41.799 10:02:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:41.799 10:02:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:41.799 10:02:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:21:41.799 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:21:41.799 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.522 ms 00:21:41.799 00:21:41.799 --- 10.0.0.2 ping statistics --- 00:21:41.799 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:41.799 rtt min/avg/max/mdev = 0.522/0.522/0.522/0.000 ms 00:21:41.799 10:02:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:21:41.799 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:21:41.799 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.077 ms 00:21:41.799 00:21:41.799 --- 10.0.0.3 ping statistics --- 00:21:41.799 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:41.799 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:21:41.799 10:02:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:41.799 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:41.799 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.085 ms 00:21:41.799 00:21:41.799 --- 10.0.0.1 ping statistics --- 00:21:41.799 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:41.799 rtt min/avg/max/mdev = 0.085/0.085/0.085/0.000 ms 00:21:41.799 10:02:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:41.799 10:02:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@433 -- # return 0 00:21:41.799 10:02:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:41.799 10:02:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:41.799 10:02:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:41.799 10:02:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:41.799 10:02:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:41.799 10:02:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:41.799 10:02:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:42.058 10:02:55 nvmf_tcp.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:21:42.059 10:02:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:42.059 10:02:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:42.059 10:02:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:21:42.059 10:02:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=76726 00:21:42.059 10:02:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:42.059 10:02:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 76726 00:21:42.059 10:02:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@829 -- # '[' -z 76726 ']' 00:21:42.059 10:02:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:42.059 10:02:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:42.059 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:42.059 10:02:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:42.059 10:02:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:42.059 10:02:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:21:42.059 [2024-07-15 10:02:55.450690] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:21:42.059 [2024-07-15 10:02:55.450769] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:42.059 [2024-07-15 10:02:55.590896] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:42.317 [2024-07-15 10:02:55.696383] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:42.317 [2024-07-15 10:02:55.696430] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:42.317 [2024-07-15 10:02:55.696437] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:42.317 [2024-07-15 10:02:55.696443] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:42.318 [2024-07-15 10:02:55.696448] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:42.318 [2024-07-15 10:02:55.696565] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:42.318 [2024-07-15 10:02:55.697017] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:42.318 [2024-07-15 10:02:55.697121] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:42.318 [2024-07-15 10:02:55.697134] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:21:42.887 10:02:56 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:42.887 10:02:56 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@862 -- # return 0 00:21:42.887 10:02:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:42.887 10:02:56 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:42.887 10:02:56 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:21:42.887 10:02:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:42.887 10:02:56 nvmf_tcp.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:42.887 10:02:56 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:42.887 10:02:56 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:21:42.887 [2024-07-15 10:02:56.398999] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:42.887 10:02:56 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:42.887 10:02:56 nvmf_tcp.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:21:42.887 10:02:56 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:42.887 10:02:56 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:21:42.887 Malloc0 00:21:42.887 10:02:56 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:42.887 10:02:56 nvmf_tcp.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:21:42.887 10:02:56 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:42.887 10:02:56 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:21:42.887 10:02:56 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:42.887 10:02:56 nvmf_tcp.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 
00:21:42.887 10:02:56 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:42.887 10:02:56 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:21:42.887 10:02:56 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:42.887 10:02:56 nvmf_tcp.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:42.887 10:02:56 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:42.887 10:02:56 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:21:43.147 [2024-07-15 10:02:56.473906] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:43.147 10:02:56 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:43.147 test case1: single bdev can't be used in multiple subsystems 00:21:43.147 10:02:56 nvmf_tcp.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:21:43.147 10:02:56 nvmf_tcp.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:21:43.147 10:02:56 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:43.147 10:02:56 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:21:43.147 10:02:56 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:43.147 10:02:56 nvmf_tcp.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:21:43.147 10:02:56 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:43.147 10:02:56 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:21:43.147 10:02:56 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:43.147 10:02:56 nvmf_tcp.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:21:43.147 10:02:56 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:21:43.147 10:02:56 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:43.147 10:02:56 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:21:43.147 [2024-07-15 10:02:56.509728] bdev.c:8078:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:21:43.147 [2024-07-15 10:02:56.509770] subsystem.c:2083:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:21:43.147 [2024-07-15 10:02:56.509794] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:43.147 2024/07/15 10:02:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:Malloc0 no_auto_visible:%!s(bool=false)] nqn:nqn.2016-06.io.spdk:cnode2], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:43.147 request: 00:21:43.147 { 00:21:43.147 "method": "nvmf_subsystem_add_ns", 00:21:43.147 "params": { 00:21:43.147 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:21:43.147 "namespace": { 00:21:43.147 "bdev_name": "Malloc0", 00:21:43.147 "no_auto_visible": false 00:21:43.147 } 00:21:43.147 } 00:21:43.147 } 00:21:43.147 Got JSON-RPC error response 00:21:43.147 GoRPCClient: error on JSON-RPC call 00:21:43.147 10:02:56 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:21:43.147 10:02:56 nvmf_tcp.nvmf_nmic -- 
target/nmic.sh@29 -- # nmic_status=1 00:21:43.147 10:02:56 nvmf_tcp.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:21:43.147 Adding namespace failed - expected result. 00:21:43.147 10:02:56 nvmf_tcp.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:21:43.147 test case2: host connect to nvmf target in multiple paths 00:21:43.147 10:02:56 nvmf_tcp.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:21:43.147 10:02:56 nvmf_tcp.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:21:43.147 10:02:56 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:43.147 10:02:56 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:21:43.147 [2024-07-15 10:02:56.525816] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:21:43.147 10:02:56 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:43.147 10:02:56 nvmf_tcp.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec --hostid=a2b6b25a-cc90-4aea-9f09-c06f8a634aec -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:21:43.147 10:02:56 nvmf_tcp.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec --hostid=a2b6b25a-cc90-4aea-9f09-c06f8a634aec -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:21:43.413 10:02:56 nvmf_tcp.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:21:43.413 10:02:56 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:21:43.413 10:02:56 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:21:43.413 10:02:56 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:21:43.413 10:02:56 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 00:21:45.326 10:02:58 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:21:45.326 10:02:58 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:21:45.326 10:02:58 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:21:45.326 10:02:58 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:21:45.326 10:02:58 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:21:45.326 10:02:58 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:21:45.326 10:02:58 nvmf_tcp.nvmf_nmic -- target/nmic.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:21:45.585 [global] 00:21:45.585 thread=1 00:21:45.585 invalidate=1 00:21:45.585 rw=write 00:21:45.585 time_based=1 00:21:45.585 runtime=1 00:21:45.585 ioengine=libaio 00:21:45.585 direct=1 00:21:45.585 bs=4096 00:21:45.585 iodepth=1 00:21:45.585 norandommap=0 00:21:45.585 numjobs=1 00:21:45.585 00:21:45.585 verify_dump=1 00:21:45.585 verify_backlog=512 00:21:45.585 verify_state_save=0 00:21:45.585 do_verify=1 00:21:45.585 verify=crc32c-intel 00:21:45.585 [job0] 00:21:45.585 filename=/dev/nvme0n1 00:21:45.585 Could not set queue depth (nvme0n1) 00:21:45.585 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:21:45.585 fio-3.35 00:21:45.585 
Starting 1 thread 00:21:46.967 00:21:46.967 job0: (groupid=0, jobs=1): err= 0: pid=76839: Mon Jul 15 10:03:00 2024 00:21:46.967 read: IOPS=4430, BW=17.3MiB/s (18.1MB/s)(17.3MiB/1001msec) 00:21:46.967 slat (nsec): min=7425, max=31310, avg=9916.13, stdev=2044.95 00:21:46.967 clat (usec): min=86, max=3249, avg=112.99, stdev=67.22 00:21:46.967 lat (usec): min=94, max=3266, avg=122.91, stdev=67.61 00:21:46.967 clat percentiles (usec): 00:21:46.967 | 1.00th=[ 92], 5.00th=[ 96], 10.00th=[ 99], 20.00th=[ 103], 00:21:46.967 | 30.00th=[ 106], 40.00th=[ 109], 50.00th=[ 111], 60.00th=[ 113], 00:21:46.967 | 70.00th=[ 116], 80.00th=[ 119], 90.00th=[ 123], 95.00th=[ 128], 00:21:46.967 | 99.00th=[ 141], 99.50th=[ 153], 99.90th=[ 693], 99.95th=[ 1012], 00:21:46.967 | 99.99th=[ 3261] 00:21:46.967 write: IOPS=4603, BW=18.0MiB/s (18.9MB/s)(18.0MiB/1001msec); 0 zone resets 00:21:46.967 slat (usec): min=10, max=134, avg=15.58, stdev= 6.07 00:21:46.967 clat (usec): min=54, max=221, avg=81.09, stdev= 8.45 00:21:46.967 lat (usec): min=73, max=264, avg=96.66, stdev=11.37 00:21:46.967 clat percentiles (usec): 00:21:46.967 | 1.00th=[ 67], 5.00th=[ 70], 10.00th=[ 73], 20.00th=[ 76], 00:21:46.967 | 30.00th=[ 78], 40.00th=[ 79], 50.00th=[ 81], 60.00th=[ 83], 00:21:46.967 | 70.00th=[ 84], 80.00th=[ 87], 90.00th=[ 91], 95.00th=[ 95], 00:21:46.967 | 99.00th=[ 106], 99.50th=[ 111], 99.90th=[ 123], 99.95th=[ 184], 00:21:46.967 | 99.99th=[ 223] 00:21:46.967 bw ( KiB/s): min=20480, max=20480, per=100.00%, avg=20480.00, stdev= 0.00, samples=1 00:21:46.967 iops : min= 5120, max= 5120, avg=5120.00, stdev= 0.00, samples=1 00:21:46.967 lat (usec) : 100=55.36%, 250=44.53%, 500=0.04%, 750=0.02%, 1000=0.01% 00:21:46.967 lat (msec) : 2=0.01%, 4=0.02% 00:21:46.967 cpu : usr=1.90%, sys=8.20%, ctx=9044, majf=0, minf=2 00:21:46.967 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:46.967 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:46.967 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:46.967 issued rwts: total=4435,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:46.967 latency : target=0, window=0, percentile=100.00%, depth=1 00:21:46.967 00:21:46.967 Run status group 0 (all jobs): 00:21:46.967 READ: bw=17.3MiB/s (18.1MB/s), 17.3MiB/s-17.3MiB/s (18.1MB/s-18.1MB/s), io=17.3MiB (18.2MB), run=1001-1001msec 00:21:46.967 WRITE: bw=18.0MiB/s (18.9MB/s), 18.0MiB/s-18.0MiB/s (18.9MB/s-18.9MB/s), io=18.0MiB (18.9MB), run=1001-1001msec 00:21:46.967 00:21:46.967 Disk stats (read/write): 00:21:46.967 nvme0n1: ios=4077/4096, merge=0/0, ticks=473/365, in_queue=838, util=90.87% 00:21:46.967 10:03:00 nvmf_tcp.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:21:46.967 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:21:46.967 10:03:00 nvmf_tcp.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:21:46.967 10:03:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:21:46.967 10:03:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:21:46.967 10:03:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:21:46.967 10:03:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:21:46.967 10:03:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:21:46.967 10:03:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1231 -- # 
return 0 00:21:46.967 10:03:00 nvmf_tcp.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:21:46.967 10:03:00 nvmf_tcp.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:21:46.967 10:03:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:46.967 10:03:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@117 -- # sync 00:21:46.968 10:03:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:46.968 10:03:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:21:46.968 10:03:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:46.968 10:03:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:46.968 rmmod nvme_tcp 00:21:46.968 rmmod nvme_fabrics 00:21:46.968 rmmod nvme_keyring 00:21:46.968 10:03:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:46.968 10:03:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:21:46.968 10:03:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:21:46.968 10:03:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@489 -- # '[' -n 76726 ']' 00:21:46.968 10:03:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 76726 00:21:46.968 10:03:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@948 -- # '[' -z 76726 ']' 00:21:46.968 10:03:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@952 -- # kill -0 76726 00:21:46.968 10:03:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@953 -- # uname 00:21:46.968 10:03:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:46.968 10:03:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 76726 00:21:46.968 10:03:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:46.968 10:03:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:46.968 10:03:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@966 -- # echo 'killing process with pid 76726' 00:21:46.968 killing process with pid 76726 00:21:46.968 10:03:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@967 -- # kill 76726 00:21:46.968 10:03:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@972 -- # wait 76726 00:21:47.229 10:03:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:47.229 10:03:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:47.229 10:03:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:47.229 10:03:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:47.229 10:03:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:47.229 10:03:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:47.229 10:03:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:47.229 10:03:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:47.229 10:03:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:21:47.229 00:21:47.229 real 0m5.858s 00:21:47.229 user 0m19.682s 00:21:47.229 sys 0m1.262s 00:21:47.229 10:03:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:47.229 10:03:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:21:47.229 ************************************ 00:21:47.229 END TEST nvmf_nmic 00:21:47.229 ************************************ 00:21:47.229 10:03:00 nvmf_tcp -- 
common/autotest_common.sh@1142 -- # return 0 00:21:47.229 10:03:00 nvmf_tcp -- nvmf/nvmf.sh@55 -- # run_test nvmf_fio_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:21:47.229 10:03:00 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:21:47.229 10:03:00 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:47.229 10:03:00 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:47.229 ************************************ 00:21:47.229 START TEST nvmf_fio_target 00:21:47.229 ************************************ 00:21:47.229 10:03:00 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:21:47.492 * Looking for test storage... 00:21:47.492 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:21:47.492 10:03:00 nvmf_tcp.nvmf_fio_target -- target/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:47.492 10:03:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:21:47.492 10:03:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:47.492 10:03:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:47.492 10:03:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:47.492 10:03:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:47.492 10:03:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:47.492 10:03:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:47.492 10:03:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:47.492 10:03:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:47.492 10:03:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:47.492 10:03:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:47.492 10:03:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec 00:21:47.492 10:03:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=a2b6b25a-cc90-4aea-9f09-c06f8a634aec 00:21:47.492 10:03:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:47.492 10:03:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:47.492 10:03:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:47.492 10:03:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:47.492 10:03:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:47.492 10:03:00 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:47.492 10:03:00 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:47.492 10:03:00 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:47.492 10:03:00 nvmf_tcp.nvmf_fio_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:47.492 10:03:00 nvmf_tcp.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:47.492 10:03:00 nvmf_tcp.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:47.492 10:03:00 nvmf_tcp.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:21:47.492 10:03:00 nvmf_tcp.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:47.492 10:03:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:21:47.492 10:03:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:47.492 10:03:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:47.492 10:03:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:47.492 10:03:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:47.492 10:03:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:47.492 10:03:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:47.492 10:03:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:47.492 10:03:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:47.492 10:03:00 nvmf_tcp.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:47.492 10:03:00 nvmf_tcp.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:47.492 10:03:00 nvmf_tcp.nvmf_fio_target -- target/fio.sh@14 -- # 
rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:47.492 10:03:00 nvmf_tcp.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:21:47.492 10:03:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:47.492 10:03:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:47.492 10:03:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:47.492 10:03:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:47.492 10:03:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:47.492 10:03:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:47.492 10:03:00 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:47.492 10:03:00 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:47.492 10:03:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:21:47.492 10:03:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:21:47.492 10:03:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:21:47.492 10:03:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:21:47.492 10:03:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:21:47.492 10:03:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@432 -- # nvmf_veth_init 00:21:47.492 10:03:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:47.492 10:03:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:47.492 10:03:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:21:47.492 10:03:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:21:47.492 10:03:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:47.492 10:03:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:47.492 10:03:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:47.492 10:03:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:47.492 10:03:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:47.492 10:03:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:47.492 10:03:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:47.492 10:03:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:47.492 10:03:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:21:47.492 10:03:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:21:47.492 Cannot find device "nvmf_tgt_br" 00:21:47.493 10:03:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@155 -- # true 00:21:47.493 10:03:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:21:47.493 Cannot find device "nvmf_tgt_br2" 00:21:47.493 10:03:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@156 -- # true 00:21:47.493 10:03:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:21:47.493 10:03:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@158 -- # 
ip link set nvmf_tgt_br down 00:21:47.493 Cannot find device "nvmf_tgt_br" 00:21:47.493 10:03:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@158 -- # true 00:21:47.493 10:03:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:21:47.493 Cannot find device "nvmf_tgt_br2" 00:21:47.493 10:03:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@159 -- # true 00:21:47.493 10:03:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:21:47.493 10:03:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:21:47.751 10:03:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:47.751 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:47.751 10:03:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@162 -- # true 00:21:47.751 10:03:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:47.751 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:47.751 10:03:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@163 -- # true 00:21:47.751 10:03:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:21:47.751 10:03:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:47.751 10:03:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:47.751 10:03:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:47.751 10:03:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:47.751 10:03:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:47.751 10:03:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:47.751 10:03:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:21:47.751 10:03:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:21:47.751 10:03:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:21:47.751 10:03:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:21:47.751 10:03:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:21:47.751 10:03:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:21:47.751 10:03:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:47.751 10:03:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:47.751 10:03:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:47.751 10:03:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:21:47.751 10:03:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:21:47.751 10:03:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:21:47.751 10:03:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br 
master nvmf_br 00:21:47.751 10:03:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:48.010 10:03:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:48.010 10:03:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:48.010 10:03:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:21:48.010 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:48.010 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.149 ms 00:21:48.010 00:21:48.010 --- 10.0.0.2 ping statistics --- 00:21:48.010 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:48.010 rtt min/avg/max/mdev = 0.149/0.149/0.149/0.000 ms 00:21:48.010 10:03:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:21:48.010 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:21:48.010 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.073 ms 00:21:48.010 00:21:48.010 --- 10.0.0.3 ping statistics --- 00:21:48.010 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:48.010 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:21:48.010 10:03:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:48.010 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:48.010 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.057 ms 00:21:48.010 00:21:48.010 --- 10.0.0.1 ping statistics --- 00:21:48.010 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:48.010 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:21:48.010 10:03:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:48.010 10:03:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@433 -- # return 0 00:21:48.010 10:03:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:48.010 10:03:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:48.010 10:03:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:48.010 10:03:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:48.010 10:03:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:48.010 10:03:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:48.010 10:03:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:48.010 10:03:01 nvmf_tcp.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:21:48.010 10:03:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:48.010 10:03:01 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:48.010 10:03:01 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:21:48.010 10:03:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=77020 00:21:48.010 10:03:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 77020 00:21:48.010 10:03:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:48.010 10:03:01 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@829 -- # '[' -z 77020 ']' 00:21:48.010 10:03:01 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:48.010 10:03:01 
nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:48.010 10:03:01 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:48.010 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:48.010 10:03:01 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:48.010 10:03:01 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:21:48.010 [2024-07-15 10:03:01.477606] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:21:48.010 [2024-07-15 10:03:01.478129] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:48.269 [2024-07-15 10:03:01.604966] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:48.269 [2024-07-15 10:03:01.724580] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:48.269 [2024-07-15 10:03:01.724624] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:48.269 [2024-07-15 10:03:01.724630] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:48.269 [2024-07-15 10:03:01.724635] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:48.269 [2024-07-15 10:03:01.724640] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:48.269 [2024-07-15 10:03:01.724972] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:48.269 [2024-07-15 10:03:01.725850] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:48.269 [2024-07-15 10:03:01.725909] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:48.270 [2024-07-15 10:03:01.725913] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:21:48.837 10:03:02 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:48.838 10:03:02 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@862 -- # return 0 00:21:48.838 10:03:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:48.838 10:03:02 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:48.838 10:03:02 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:21:48.838 10:03:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:48.838 10:03:02 nvmf_tcp.nvmf_fio_target -- target/fio.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:21:49.097 [2024-07-15 10:03:02.611317] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:49.097 10:03:02 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:21:49.357 10:03:02 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:21:49.357 10:03:02 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:21:49.616 10:03:03 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 
00:21:49.616 10:03:03 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:21:49.874 10:03:03 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:21:49.874 10:03:03 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:21:50.131 10:03:03 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:21:50.131 10:03:03 nvmf_tcp.nvmf_fio_target -- target/fio.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:21:50.389 10:03:03 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:21:50.659 10:03:04 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:21:50.659 10:03:04 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:21:50.919 10:03:04 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:21:50.919 10:03:04 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:21:51.177 10:03:04 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:21:51.177 10:03:04 nvmf_tcp.nvmf_fio_target -- target/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:21:51.177 10:03:04 nvmf_tcp.nvmf_fio_target -- target/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:21:51.437 10:03:04 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:21:51.437 10:03:04 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:51.696 10:03:05 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:21:51.696 10:03:05 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:21:51.956 10:03:05 nvmf_tcp.nvmf_fio_target -- target/fio.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:52.215 [2024-07-15 10:03:05.545827] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:52.215 10:03:05 nvmf_tcp.nvmf_fio_target -- target/fio.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:21:52.474 10:03:05 nvmf_tcp.nvmf_fio_target -- target/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:21:52.474 10:03:06 nvmf_tcp.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec --hostid=a2b6b25a-cc90-4aea-9f09-c06f8a634aec -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:21:52.734 10:03:06 nvmf_tcp.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:21:52.734 10:03:06 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:21:52.734 10:03:06 nvmf_tcp.nvmf_fio_target -- 
common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:21:52.734 10:03:06 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:21:52.734 10:03:06 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:21:52.734 10:03:06 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:21:54.640 10:03:08 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:21:54.640 10:03:08 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:21:54.640 10:03:08 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:21:54.640 10:03:08 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:21:54.640 10:03:08 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:21:54.640 10:03:08 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:21:54.640 10:03:08 nvmf_tcp.nvmf_fio_target -- target/fio.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:21:54.899 [global] 00:21:54.899 thread=1 00:21:54.899 invalidate=1 00:21:54.899 rw=write 00:21:54.899 time_based=1 00:21:54.899 runtime=1 00:21:54.899 ioengine=libaio 00:21:54.899 direct=1 00:21:54.899 bs=4096 00:21:54.899 iodepth=1 00:21:54.899 norandommap=0 00:21:54.899 numjobs=1 00:21:54.899 00:21:54.899 verify_dump=1 00:21:54.899 verify_backlog=512 00:21:54.899 verify_state_save=0 00:21:54.899 do_verify=1 00:21:54.899 verify=crc32c-intel 00:21:54.899 [job0] 00:21:54.899 filename=/dev/nvme0n1 00:21:54.899 [job1] 00:21:54.899 filename=/dev/nvme0n2 00:21:54.899 [job2] 00:21:54.899 filename=/dev/nvme0n3 00:21:54.899 [job3] 00:21:54.899 filename=/dev/nvme0n4 00:21:54.899 Could not set queue depth (nvme0n1) 00:21:54.899 Could not set queue depth (nvme0n2) 00:21:54.899 Could not set queue depth (nvme0n3) 00:21:54.899 Could not set queue depth (nvme0n4) 00:21:54.899 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:21:54.899 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:21:54.899 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:21:54.899 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:21:54.899 fio-3.35 00:21:54.899 Starting 4 threads 00:21:56.272 00:21:56.272 job0: (groupid=0, jobs=1): err= 0: pid=77310: Mon Jul 15 10:03:09 2024 00:21:56.272 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:21:56.272 slat (nsec): min=8057, max=86982, avg=11073.93, stdev=4242.16 00:21:56.272 clat (usec): min=129, max=426, avg=247.68, stdev=22.09 00:21:56.272 lat (usec): min=139, max=436, avg=258.75, stdev=22.07 00:21:56.272 clat percentiles (usec): 00:21:56.272 | 1.00th=[ 155], 5.00th=[ 221], 10.00th=[ 227], 20.00th=[ 235], 00:21:56.272 | 30.00th=[ 241], 40.00th=[ 245], 50.00th=[ 249], 60.00th=[ 253], 00:21:56.272 | 70.00th=[ 258], 80.00th=[ 262], 90.00th=[ 269], 95.00th=[ 277], 00:21:56.272 | 99.00th=[ 293], 99.50th=[ 306], 99.90th=[ 363], 99.95th=[ 416], 00:21:56.272 | 99.99th=[ 429] 00:21:56.272 write: IOPS=2147, BW=8591KiB/s (8798kB/s)(8600KiB/1001msec); 0 zone resets 00:21:56.272 slat (usec): min=11, max=184, avg=18.76, stdev=11.18 00:21:56.272 clat (usec): min=92, max=381, avg=197.22, 
stdev=26.05 00:21:56.272 lat (usec): min=104, max=518, avg=215.99, stdev=26.58 00:21:56.272 clat percentiles (usec): 00:21:56.272 | 1.00th=[ 103], 5.00th=[ 133], 10.00th=[ 174], 20.00th=[ 188], 00:21:56.272 | 30.00th=[ 194], 40.00th=[ 198], 50.00th=[ 202], 60.00th=[ 204], 00:21:56.272 | 70.00th=[ 208], 80.00th=[ 212], 90.00th=[ 221], 95.00th=[ 229], 00:21:56.272 | 99.00th=[ 245], 99.50th=[ 249], 99.90th=[ 269], 99.95th=[ 281], 00:21:56.272 | 99.99th=[ 383] 00:21:56.272 bw ( KiB/s): min= 8280, max= 8280, per=24.19%, avg=8280.00, stdev= 0.00, samples=1 00:21:56.272 iops : min= 2070, max= 2070, avg=2070.00, stdev= 0.00, samples=1 00:21:56.272 lat (usec) : 100=0.26%, 250=75.92%, 500=23.82% 00:21:56.272 cpu : usr=1.20%, sys=4.50%, ctx=4198, majf=0, minf=9 00:21:56.272 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:56.272 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:56.272 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:56.272 issued rwts: total=2048,2150,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:56.272 latency : target=0, window=0, percentile=100.00%, depth=1 00:21:56.272 job1: (groupid=0, jobs=1): err= 0: pid=77311: Mon Jul 15 10:03:09 2024 00:21:56.272 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:21:56.272 slat (nsec): min=5238, max=59805, avg=8801.88, stdev=2900.38 00:21:56.272 clat (usec): min=137, max=2401, avg=249.16, stdev=57.22 00:21:56.272 lat (usec): min=144, max=2415, avg=257.96, stdev=57.76 00:21:56.272 clat percentiles (usec): 00:21:56.272 | 1.00th=[ 200], 5.00th=[ 210], 10.00th=[ 217], 20.00th=[ 225], 00:21:56.272 | 30.00th=[ 233], 40.00th=[ 241], 50.00th=[ 247], 60.00th=[ 251], 00:21:56.272 | 70.00th=[ 255], 80.00th=[ 265], 90.00th=[ 285], 95.00th=[ 310], 00:21:56.272 | 99.00th=[ 330], 99.50th=[ 343], 99.90th=[ 627], 99.95th=[ 857], 00:21:56.272 | 99.99th=[ 2409] 00:21:56.272 write: IOPS=2054, BW=8220KiB/s (8417kB/s)(8228KiB/1001msec); 0 zone resets 00:21:56.272 slat (usec): min=7, max=121, avg=17.99, stdev=10.32 00:21:56.272 clat (usec): min=93, max=7208, avg=208.83, stdev=280.54 00:21:56.272 lat (usec): min=109, max=7221, avg=226.82, stdev=280.58 00:21:56.272 clat percentiles (usec): 00:21:56.272 | 1.00th=[ 104], 5.00th=[ 123], 10.00th=[ 157], 20.00th=[ 176], 00:21:56.272 | 30.00th=[ 188], 40.00th=[ 196], 50.00th=[ 202], 60.00th=[ 206], 00:21:56.272 | 70.00th=[ 210], 80.00th=[ 217], 90.00th=[ 225], 95.00th=[ 233], 00:21:56.272 | 99.00th=[ 265], 99.50th=[ 306], 99.90th=[ 6259], 99.95th=[ 6259], 00:21:56.272 | 99.99th=[ 7177] 00:21:56.272 bw ( KiB/s): min= 8192, max= 8192, per=23.93%, avg=8192.00, stdev= 0.00, samples=1 00:21:56.272 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:21:56.272 lat (usec) : 100=0.19%, 250=77.76%, 500=21.78%, 750=0.05%, 1000=0.02% 00:21:56.272 lat (msec) : 2=0.02%, 4=0.07%, 10=0.10% 00:21:56.272 cpu : usr=0.60%, sys=4.60%, ctx=4110, majf=0, minf=5 00:21:56.272 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:56.272 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:56.272 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:56.272 issued rwts: total=2048,2057,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:56.272 latency : target=0, window=0, percentile=100.00%, depth=1 00:21:56.272 job2: (groupid=0, jobs=1): err= 0: pid=77312: Mon Jul 15 10:03:09 2024 00:21:56.272 read: IOPS=1998, BW=7992KiB/s (8184kB/s)(8000KiB/1001msec) 00:21:56.272 slat 
(nsec): min=7398, max=81631, avg=11905.18, stdev=5835.40 00:21:56.272 clat (usec): min=158, max=1533, avg=258.34, stdev=39.93 00:21:56.272 lat (usec): min=167, max=1542, avg=270.25, stdev=40.39 00:21:56.272 clat percentiles (usec): 00:21:56.272 | 1.00th=[ 217], 5.00th=[ 231], 10.00th=[ 237], 20.00th=[ 243], 00:21:56.272 | 30.00th=[ 247], 40.00th=[ 249], 50.00th=[ 253], 60.00th=[ 255], 00:21:56.272 | 70.00th=[ 260], 80.00th=[ 265], 90.00th=[ 277], 95.00th=[ 326], 00:21:56.272 | 99.00th=[ 363], 99.50th=[ 379], 99.90th=[ 570], 99.95th=[ 1532], 00:21:56.272 | 99.99th=[ 1532] 00:21:56.272 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:21:56.272 slat (usec): min=12, max=143, avg=17.56, stdev= 5.10 00:21:56.272 clat (usec): min=119, max=400, avg=204.00, stdev=15.99 00:21:56.272 lat (usec): min=141, max=414, avg=221.55, stdev=15.79 00:21:56.272 clat percentiles (usec): 00:21:56.272 | 1.00th=[ 169], 5.00th=[ 182], 10.00th=[ 186], 20.00th=[ 192], 00:21:56.272 | 30.00th=[ 196], 40.00th=[ 200], 50.00th=[ 204], 60.00th=[ 206], 00:21:56.272 | 70.00th=[ 212], 80.00th=[ 217], 90.00th=[ 225], 95.00th=[ 231], 00:21:56.272 | 99.00th=[ 249], 99.50th=[ 253], 99.90th=[ 269], 99.95th=[ 289], 00:21:56.272 | 99.99th=[ 400] 00:21:56.272 bw ( KiB/s): min= 8192, max= 8192, per=23.93%, avg=8192.00, stdev= 0.00, samples=1 00:21:56.272 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:21:56.272 lat (usec) : 250=70.95%, 500=28.98%, 750=0.05% 00:21:56.272 lat (msec) : 2=0.02% 00:21:56.272 cpu : usr=0.60%, sys=4.80%, ctx=4050, majf=0, minf=7 00:21:56.272 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:56.272 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:56.272 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:56.272 issued rwts: total=2000,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:56.272 latency : target=0, window=0, percentile=100.00%, depth=1 00:21:56.272 job3: (groupid=0, jobs=1): err= 0: pid=77313: Mon Jul 15 10:03:09 2024 00:21:56.272 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:21:56.272 slat (nsec): min=6366, max=25600, avg=9099.92, stdev=1479.61 00:21:56.272 clat (usec): min=134, max=497, avg=243.71, stdev=35.64 00:21:56.272 lat (usec): min=143, max=506, avg=252.81, stdev=35.78 00:21:56.272 clat percentiles (usec): 00:21:56.272 | 1.00th=[ 141], 5.00th=[ 192], 10.00th=[ 210], 20.00th=[ 221], 00:21:56.272 | 30.00th=[ 231], 40.00th=[ 239], 50.00th=[ 247], 60.00th=[ 251], 00:21:56.272 | 70.00th=[ 258], 80.00th=[ 265], 90.00th=[ 277], 95.00th=[ 310], 00:21:56.273 | 99.00th=[ 338], 99.50th=[ 347], 99.90th=[ 383], 99.95th=[ 400], 00:21:56.273 | 99.99th=[ 498] 00:21:56.273 write: IOPS=2309, BW=9239KiB/s (9460kB/s)(9248KiB/1001msec); 0 zone resets 00:21:56.273 slat (usec): min=7, max=122, avg=15.38, stdev= 6.82 00:21:56.273 clat (usec): min=100, max=1643, avg=191.06, stdev=44.66 00:21:56.273 lat (usec): min=113, max=1656, avg=206.44, stdev=45.56 00:21:56.273 clat percentiles (usec): 00:21:56.273 | 1.00th=[ 111], 5.00th=[ 121], 10.00th=[ 131], 20.00th=[ 165], 00:21:56.273 | 30.00th=[ 186], 40.00th=[ 194], 50.00th=[ 200], 60.00th=[ 204], 00:21:56.273 | 70.00th=[ 208], 80.00th=[ 215], 90.00th=[ 223], 95.00th=[ 231], 00:21:56.273 | 99.00th=[ 247], 99.50th=[ 269], 99.90th=[ 297], 99.95th=[ 338], 00:21:56.273 | 99.99th=[ 1647] 00:21:56.273 bw ( KiB/s): min= 8192, max= 8192, per=23.93%, avg=8192.00, stdev= 0.00, samples=1 00:21:56.273 iops : min= 2048, max= 2048, 
avg=2048.00, stdev= 0.00, samples=1 00:21:56.273 lat (usec) : 250=78.92%, 500=21.06% 00:21:56.273 lat (msec) : 2=0.02% 00:21:56.273 cpu : usr=0.80%, sys=4.40%, ctx=4360, majf=0, minf=16 00:21:56.273 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:56.273 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:56.273 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:56.273 issued rwts: total=2048,2312,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:56.273 latency : target=0, window=0, percentile=100.00%, depth=1 00:21:56.273 00:21:56.273 Run status group 0 (all jobs): 00:21:56.273 READ: bw=31.8MiB/s (33.3MB/s), 7992KiB/s-8184KiB/s (8184kB/s-8380kB/s), io=31.8MiB (33.4MB), run=1001-1001msec 00:21:56.273 WRITE: bw=33.4MiB/s (35.1MB/s), 8184KiB/s-9239KiB/s (8380kB/s-9460kB/s), io=33.5MiB (35.1MB), run=1001-1001msec 00:21:56.273 00:21:56.273 Disk stats (read/write): 00:21:56.273 nvme0n1: ios=1627/2048, merge=0/0, ticks=422/417, in_queue=839, util=87.58% 00:21:56.273 nvme0n2: ios=1571/1957, merge=0/0, ticks=517/421, in_queue=938, util=90.46% 00:21:56.273 nvme0n3: ios=1536/1949, merge=0/0, ticks=409/404, in_queue=813, util=89.17% 00:21:56.273 nvme0n4: ios=1643/2048, merge=0/0, ticks=417/409, in_queue=826, util=89.63% 00:21:56.273 10:03:09 nvmf_tcp.nvmf_fio_target -- target/fio.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:21:56.273 [global] 00:21:56.273 thread=1 00:21:56.273 invalidate=1 00:21:56.273 rw=randwrite 00:21:56.273 time_based=1 00:21:56.273 runtime=1 00:21:56.273 ioengine=libaio 00:21:56.273 direct=1 00:21:56.273 bs=4096 00:21:56.273 iodepth=1 00:21:56.273 norandommap=0 00:21:56.273 numjobs=1 00:21:56.273 00:21:56.273 verify_dump=1 00:21:56.273 verify_backlog=512 00:21:56.273 verify_state_save=0 00:21:56.273 do_verify=1 00:21:56.273 verify=crc32c-intel 00:21:56.273 [job0] 00:21:56.273 filename=/dev/nvme0n1 00:21:56.273 [job1] 00:21:56.273 filename=/dev/nvme0n2 00:21:56.273 [job2] 00:21:56.273 filename=/dev/nvme0n3 00:21:56.273 [job3] 00:21:56.273 filename=/dev/nvme0n4 00:21:56.273 Could not set queue depth (nvme0n1) 00:21:56.273 Could not set queue depth (nvme0n2) 00:21:56.273 Could not set queue depth (nvme0n3) 00:21:56.273 Could not set queue depth (nvme0n4) 00:21:56.273 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:21:56.273 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:21:56.273 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:21:56.273 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:21:56.273 fio-3.35 00:21:56.273 Starting 4 threads 00:21:57.641 00:21:57.641 job0: (groupid=0, jobs=1): err= 0: pid=77366: Mon Jul 15 10:03:11 2024 00:21:57.641 read: IOPS=3584, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1000msec) 00:21:57.641 slat (nsec): min=6884, max=27992, avg=8458.53, stdev=1324.38 00:21:57.641 clat (usec): min=100, max=479, avg=137.69, stdev=14.19 00:21:57.641 lat (usec): min=107, max=488, avg=146.15, stdev=14.51 00:21:57.641 clat percentiles (usec): 00:21:57.641 | 1.00th=[ 116], 5.00th=[ 122], 10.00th=[ 126], 20.00th=[ 130], 00:21:57.641 | 30.00th=[ 133], 40.00th=[ 135], 50.00th=[ 137], 60.00th=[ 139], 00:21:57.641 | 70.00th=[ 141], 80.00th=[ 145], 90.00th=[ 151], 95.00th=[ 157], 00:21:57.641 | 99.00th=[ 
178], 99.50th=[ 188], 99.90th=[ 208], 99.95th=[ 437], 00:21:57.641 | 99.99th=[ 478] 00:21:57.641 write: IOPS=3776, BW=14.8MiB/s (15.5MB/s)(14.8MiB/1000msec); 0 zone resets 00:21:57.641 slat (usec): min=10, max=150, avg=14.00, stdev= 6.08 00:21:57.641 clat (usec): min=78, max=201, avg=110.03, stdev=10.10 00:21:57.641 lat (usec): min=90, max=351, avg=124.03, stdev=13.13 00:21:57.641 clat percentiles (usec): 00:21:57.641 | 1.00th=[ 91], 5.00th=[ 96], 10.00th=[ 99], 20.00th=[ 102], 00:21:57.641 | 30.00th=[ 104], 40.00th=[ 108], 50.00th=[ 110], 60.00th=[ 112], 00:21:57.641 | 70.00th=[ 115], 80.00th=[ 118], 90.00th=[ 124], 95.00th=[ 129], 00:21:57.641 | 99.00th=[ 139], 99.50th=[ 143], 99.90th=[ 155], 99.95th=[ 172], 00:21:57.641 | 99.99th=[ 202] 00:21:57.641 bw ( KiB/s): min=16384, max=16384, per=35.30%, avg=16384.00, stdev= 0.00, samples=1 00:21:57.641 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=1 00:21:57.641 lat (usec) : 100=7.36%, 250=92.60%, 500=0.04% 00:21:57.641 cpu : usr=1.60%, sys=6.00%, ctx=7362, majf=0, minf=17 00:21:57.641 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:57.641 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:57.641 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:57.641 issued rwts: total=3584,3776,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:57.641 latency : target=0, window=0, percentile=100.00%, depth=1 00:21:57.641 job1: (groupid=0, jobs=1): err= 0: pid=77367: Mon Jul 15 10:03:11 2024 00:21:57.641 read: IOPS=3580, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1001msec) 00:21:57.641 slat (nsec): min=7073, max=32354, avg=8754.95, stdev=1849.63 00:21:57.641 clat (usec): min=109, max=272, avg=137.47, stdev=10.48 00:21:57.641 lat (usec): min=117, max=280, avg=146.23, stdev=10.85 00:21:57.641 clat percentiles (usec): 00:21:57.641 | 1.00th=[ 117], 5.00th=[ 123], 10.00th=[ 126], 20.00th=[ 130], 00:21:57.641 | 30.00th=[ 133], 40.00th=[ 135], 50.00th=[ 137], 60.00th=[ 139], 00:21:57.641 | 70.00th=[ 143], 80.00th=[ 145], 90.00th=[ 151], 95.00th=[ 155], 00:21:57.641 | 99.00th=[ 169], 99.50th=[ 178], 99.90th=[ 200], 99.95th=[ 212], 00:21:57.641 | 99.99th=[ 273] 00:21:57.641 write: IOPS=3740, BW=14.6MiB/s (15.3MB/s)(14.6MiB/1001msec); 0 zone resets 00:21:57.641 slat (usec): min=10, max=144, avg=14.39, stdev= 6.20 00:21:57.641 clat (usec): min=83, max=243, avg=110.56, stdev=10.90 00:21:57.641 lat (usec): min=94, max=347, avg=124.94, stdev=13.72 00:21:57.641 clat percentiles (usec): 00:21:57.642 | 1.00th=[ 91], 5.00th=[ 97], 10.00th=[ 99], 20.00th=[ 102], 00:21:57.642 | 30.00th=[ 105], 40.00th=[ 108], 50.00th=[ 110], 60.00th=[ 112], 00:21:57.642 | 70.00th=[ 115], 80.00th=[ 119], 90.00th=[ 124], 95.00th=[ 129], 00:21:57.642 | 99.00th=[ 145], 99.50th=[ 153], 99.90th=[ 194], 99.95th=[ 206], 00:21:57.642 | 99.99th=[ 245] 00:21:57.642 bw ( KiB/s): min=16384, max=16384, per=35.30%, avg=16384.00, stdev= 0.00, samples=1 00:21:57.642 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=1 00:21:57.642 lat (usec) : 100=6.54%, 250=93.45%, 500=0.01% 00:21:57.642 cpu : usr=1.50%, sys=6.20%, ctx=7328, majf=0, minf=11 00:21:57.642 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:57.642 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:57.642 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:57.642 issued rwts: total=3584,3744,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:57.642 latency : target=0, window=0, 
percentile=100.00%, depth=1 00:21:57.642 job2: (groupid=0, jobs=1): err= 0: pid=77368: Mon Jul 15 10:03:11 2024 00:21:57.642 read: IOPS=1896, BW=7584KiB/s (7766kB/s)(7592KiB/1001msec) 00:21:57.642 slat (nsec): min=5866, max=60260, avg=8870.99, stdev=4093.88 00:21:57.642 clat (usec): min=160, max=40723, avg=282.09, stdev=928.99 00:21:57.642 lat (usec): min=167, max=40734, avg=290.96, stdev=929.05 00:21:57.642 clat percentiles (usec): 00:21:57.642 | 1.00th=[ 217], 5.00th=[ 233], 10.00th=[ 241], 20.00th=[ 247], 00:21:57.642 | 30.00th=[ 253], 40.00th=[ 258], 50.00th=[ 262], 60.00th=[ 265], 00:21:57.642 | 70.00th=[ 269], 80.00th=[ 273], 90.00th=[ 281], 95.00th=[ 289], 00:21:57.642 | 99.00th=[ 310], 99.50th=[ 343], 99.90th=[ 562], 99.95th=[40633], 00:21:57.642 | 99.99th=[40633] 00:21:57.642 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:21:57.642 slat (usec): min=6, max=227, avg=14.02, stdev= 9.20 00:21:57.642 clat (usec): min=29, max=1504, avg=202.44, stdev=41.87 00:21:57.642 lat (usec): min=113, max=1518, avg=216.46, stdev=41.49 00:21:57.642 clat percentiles (usec): 00:21:57.642 | 1.00th=[ 120], 5.00th=[ 155], 10.00th=[ 174], 20.00th=[ 184], 00:21:57.642 | 30.00th=[ 192], 40.00th=[ 198], 50.00th=[ 202], 60.00th=[ 206], 00:21:57.642 | 70.00th=[ 210], 80.00th=[ 219], 90.00th=[ 231], 95.00th=[ 247], 00:21:57.642 | 99.00th=[ 297], 99.50th=[ 322], 99.90th=[ 347], 99.95th=[ 416], 00:21:57.642 | 99.99th=[ 1500] 00:21:57.642 bw ( KiB/s): min= 8192, max= 8192, per=17.65%, avg=8192.00, stdev= 0.00, samples=1 00:21:57.642 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:21:57.642 lat (usec) : 50=0.03%, 250=61.20%, 500=38.70%, 750=0.03% 00:21:57.642 lat (msec) : 2=0.03%, 50=0.03% 00:21:57.642 cpu : usr=1.00%, sys=3.50%, ctx=3952, majf=0, minf=10 00:21:57.642 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:57.642 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:57.642 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:57.642 issued rwts: total=1898,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:57.642 latency : target=0, window=0, percentile=100.00%, depth=1 00:21:57.642 job3: (groupid=0, jobs=1): err= 0: pid=77369: Mon Jul 15 10:03:11 2024 00:21:57.642 read: IOPS=1897, BW=7588KiB/s (7771kB/s)(7596KiB/1001msec) 00:21:57.642 slat (nsec): min=5405, max=37850, avg=8129.79, stdev=2817.94 00:21:57.642 clat (usec): min=110, max=40688, avg=282.78, stdev=927.94 00:21:57.642 lat (usec): min=124, max=40696, avg=290.91, stdev=927.95 00:21:57.642 clat percentiles (usec): 00:21:57.642 | 1.00th=[ 219], 5.00th=[ 233], 10.00th=[ 241], 20.00th=[ 247], 00:21:57.642 | 30.00th=[ 253], 40.00th=[ 258], 50.00th=[ 262], 60.00th=[ 265], 00:21:57.642 | 70.00th=[ 269], 80.00th=[ 273], 90.00th=[ 285], 95.00th=[ 293], 00:21:57.642 | 99.00th=[ 314], 99.50th=[ 347], 99.90th=[ 578], 99.95th=[40633], 00:21:57.642 | 99.99th=[40633] 00:21:57.642 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:21:57.642 slat (usec): min=6, max=190, avg=13.63, stdev= 9.42 00:21:57.642 clat (usec): min=24, max=1434, avg=202.71, stdev=41.07 00:21:57.642 lat (usec): min=111, max=1447, avg=216.35, stdev=40.29 00:21:57.642 clat percentiles (usec): 00:21:57.642 | 1.00th=[ 117], 5.00th=[ 159], 10.00th=[ 176], 20.00th=[ 186], 00:21:57.642 | 30.00th=[ 192], 40.00th=[ 196], 50.00th=[ 202], 60.00th=[ 206], 00:21:57.642 | 70.00th=[ 212], 80.00th=[ 219], 90.00th=[ 231], 95.00th=[ 245], 00:21:57.642 | 
99.00th=[ 310], 99.50th=[ 322], 99.90th=[ 351], 99.95th=[ 367], 00:21:57.642 | 99.99th=[ 1434] 00:21:57.642 bw ( KiB/s): min= 8192, max= 8192, per=17.65%, avg=8192.00, stdev= 0.00, samples=1 00:21:57.642 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:21:57.642 lat (usec) : 50=0.05%, 250=61.26%, 500=38.61%, 750=0.03% 00:21:57.642 lat (msec) : 2=0.03%, 50=0.03% 00:21:57.642 cpu : usr=1.20%, sys=3.20%, ctx=3956, majf=0, minf=9 00:21:57.642 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:57.642 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:57.642 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:57.642 issued rwts: total=1899,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:57.642 latency : target=0, window=0, percentile=100.00%, depth=1 00:21:57.642 00:21:57.642 Run status group 0 (all jobs): 00:21:57.642 READ: bw=42.8MiB/s (44.9MB/s), 7584KiB/s-14.0MiB/s (7766kB/s-14.7MB/s), io=42.8MiB (44.9MB), run=1000-1001msec 00:21:57.642 WRITE: bw=45.3MiB/s (47.5MB/s), 8184KiB/s-14.8MiB/s (8380kB/s-15.5MB/s), io=45.4MiB (47.6MB), run=1000-1001msec 00:21:57.642 00:21:57.642 Disk stats (read/write): 00:21:57.642 nvme0n1: ios=3122/3433, merge=0/0, ticks=458/405, in_queue=863, util=89.97% 00:21:57.642 nvme0n2: ios=3121/3376, merge=0/0, ticks=459/402, in_queue=861, util=90.32% 00:21:57.642 nvme0n3: ios=1573/1943, merge=0/0, ticks=461/393, in_queue=854, util=90.33% 00:21:57.642 nvme0n4: ios=1557/1943, merge=0/0, ticks=453/384, in_queue=837, util=90.19% 00:21:57.642 10:03:11 nvmf_tcp.nvmf_fio_target -- target/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:21:57.642 [global] 00:21:57.642 thread=1 00:21:57.642 invalidate=1 00:21:57.642 rw=write 00:21:57.642 time_based=1 00:21:57.642 runtime=1 00:21:57.642 ioengine=libaio 00:21:57.642 direct=1 00:21:57.642 bs=4096 00:21:57.642 iodepth=128 00:21:57.642 norandommap=0 00:21:57.642 numjobs=1 00:21:57.642 00:21:57.642 verify_dump=1 00:21:57.642 verify_backlog=512 00:21:57.642 verify_state_save=0 00:21:57.642 do_verify=1 00:21:57.642 verify=crc32c-intel 00:21:57.642 [job0] 00:21:57.642 filename=/dev/nvme0n1 00:21:57.642 [job1] 00:21:57.642 filename=/dev/nvme0n2 00:21:57.642 [job2] 00:21:57.642 filename=/dev/nvme0n3 00:21:57.642 [job3] 00:21:57.642 filename=/dev/nvme0n4 00:21:57.642 Could not set queue depth (nvme0n1) 00:21:57.642 Could not set queue depth (nvme0n2) 00:21:57.642 Could not set queue depth (nvme0n3) 00:21:57.642 Could not set queue depth (nvme0n4) 00:21:57.935 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:21:57.935 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:21:57.935 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:21:57.935 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:21:57.935 fio-3.35 00:21:57.935 Starting 4 threads 00:21:58.891 00:21:58.891 job0: (groupid=0, jobs=1): err= 0: pid=77428: Mon Jul 15 10:03:12 2024 00:21:58.891 read: IOPS=2549, BW=9.96MiB/s (10.4MB/s)(10.0MiB/1004msec) 00:21:58.891 slat (usec): min=7, max=19127, avg=189.71, stdev=1070.38 00:21:58.891 clat (usec): min=11736, max=68230, avg=25726.75, stdev=12478.70 00:21:58.891 lat (usec): min=13798, max=68252, avg=25916.46, stdev=12530.69 00:21:58.891 clat percentiles 
(usec): 00:21:58.891 | 1.00th=[13304], 5.00th=[14877], 10.00th=[15795], 20.00th=[16319], 00:21:58.891 | 30.00th=[16581], 40.00th=[17171], 50.00th=[18744], 60.00th=[25297], 00:21:58.891 | 70.00th=[31327], 80.00th=[35390], 90.00th=[42730], 95.00th=[49021], 00:21:58.891 | 99.00th=[68682], 99.50th=[68682], 99.90th=[68682], 99.95th=[68682], 00:21:58.891 | 99.99th=[68682] 00:21:58.891 write: IOPS=2869, BW=11.2MiB/s (11.8MB/s)(11.3MiB/1004msec); 0 zone resets 00:21:58.891 slat (usec): min=22, max=13582, avg=170.57, stdev=909.63 00:21:58.891 clat (usec): min=189, max=52840, avg=20490.41, stdev=9407.51 00:21:58.891 lat (usec): min=8315, max=52874, avg=20660.98, stdev=9441.18 00:21:58.891 clat percentiles (usec): 00:21:58.891 | 1.00th=[ 9372], 5.00th=[12649], 10.00th=[13042], 20.00th=[13304], 00:21:58.891 | 30.00th=[13698], 40.00th=[14353], 50.00th=[15401], 60.00th=[22152], 00:21:58.891 | 70.00th=[23987], 80.00th=[26346], 90.00th=[28967], 95.00th=[39584], 00:21:58.891 | 99.00th=[52691], 99.50th=[52691], 99.90th=[52691], 99.95th=[52691], 00:21:58.891 | 99.99th=[52691] 00:21:58.891 bw ( KiB/s): min= 9699, max=12319, per=20.65%, avg=11009.00, stdev=1852.62, samples=2 00:21:58.891 iops : min= 2424, max= 3079, avg=2751.50, stdev=463.15, samples=2 00:21:58.891 lat (usec) : 250=0.02% 00:21:58.891 lat (msec) : 10=0.61%, 20=54.79%, 50=40.01%, 100=4.58% 00:21:58.891 cpu : usr=2.59%, sys=10.87%, ctx=185, majf=0, minf=17 00:21:58.891 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8% 00:21:58.891 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:58.891 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:58.891 issued rwts: total=2560,2881,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:58.891 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:58.891 job1: (groupid=0, jobs=1): err= 0: pid=77429: Mon Jul 15 10:03:12 2024 00:21:58.891 read: IOPS=2041, BW=8167KiB/s (8364kB/s)(8192KiB/1003msec) 00:21:58.891 slat (usec): min=4, max=8576, avg=223.24, stdev=861.41 00:21:58.891 clat (usec): min=19089, max=48075, avg=27503.78, stdev=4393.13 00:21:58.891 lat (usec): min=19128, max=48139, avg=27727.02, stdev=4445.86 00:21:58.891 clat percentiles (usec): 00:21:58.891 | 1.00th=[20317], 5.00th=[21365], 10.00th=[22414], 20.00th=[23725], 00:21:58.891 | 30.00th=[25035], 40.00th=[26084], 50.00th=[27132], 60.00th=[28181], 00:21:58.891 | 70.00th=[29230], 80.00th=[30278], 90.00th=[32113], 95.00th=[35390], 00:21:58.891 | 99.00th=[41681], 99.50th=[44303], 99.90th=[44827], 99.95th=[44827], 00:21:58.891 | 99.99th=[47973] 00:21:58.891 write: IOPS=2228, BW=8913KiB/s (9127kB/s)(8940KiB/1003msec); 0 zone resets 00:21:58.891 slat (usec): min=5, max=6697, avg=232.87, stdev=775.85 00:21:58.891 clat (usec): min=2573, max=56221, avg=31406.74, stdev=12997.35 00:21:58.891 lat (usec): min=2606, max=56247, avg=31639.61, stdev=13091.87 00:21:58.891 clat percentiles (usec): 00:21:58.891 | 1.00th=[ 3458], 5.00th=[17695], 10.00th=[19530], 20.00th=[20055], 00:21:58.891 | 30.00th=[20841], 40.00th=[22676], 50.00th=[24249], 60.00th=[38011], 00:21:58.891 | 70.00th=[41681], 80.00th=[46400], 90.00th=[49546], 95.00th=[51643], 00:21:58.891 | 99.00th=[55313], 99.50th=[55313], 99.90th=[56361], 99.95th=[56361], 00:21:58.891 | 99.99th=[56361] 00:21:58.891 bw ( KiB/s): min= 6419, max=10411, per=15.78%, avg=8415.00, stdev=2822.77, samples=2 00:21:58.891 iops : min= 1604, max= 2602, avg=2103.00, stdev=705.69, samples=2 00:21:58.891 lat (msec) : 4=0.54%, 10=0.82%, 20=7.82%, 50=85.78%, 
100=5.04% 00:21:58.891 cpu : usr=2.20%, sys=9.38%, ctx=709, majf=0, minf=5 00:21:58.891 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.5% 00:21:58.891 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:58.891 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:58.891 issued rwts: total=2048,2235,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:58.891 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:58.891 job2: (groupid=0, jobs=1): err= 0: pid=77430: Mon Jul 15 10:03:12 2024 00:21:58.891 read: IOPS=5902, BW=23.1MiB/s (24.2MB/s)(23.1MiB/1001msec) 00:21:58.891 slat (usec): min=6, max=3121, avg=78.58, stdev=308.51 00:21:58.891 clat (usec): min=455, max=13247, avg=10691.39, stdev=1201.85 00:21:58.891 lat (usec): min=1045, max=14623, avg=10769.97, stdev=1185.95 00:21:58.891 clat percentiles (usec): 00:21:58.891 | 1.00th=[ 6521], 5.00th=[ 8848], 10.00th=[ 9241], 20.00th=[ 9765], 00:21:58.891 | 30.00th=[10290], 40.00th=[10683], 50.00th=[10945], 60.00th=[11207], 00:21:58.891 | 70.00th=[11338], 80.00th=[11600], 90.00th=[11863], 95.00th=[12125], 00:21:58.891 | 99.00th=[12518], 99.50th=[12780], 99.90th=[13173], 99.95th=[13173], 00:21:58.891 | 99.99th=[13304] 00:21:58.891 write: IOPS=6137, BW=24.0MiB/s (25.1MB/s)(24.0MiB/1001msec); 0 zone resets 00:21:58.891 slat (usec): min=20, max=2577, avg=77.40, stdev=262.23 00:21:58.891 clat (usec): min=8025, max=13129, avg=10315.53, stdev=1005.71 00:21:58.891 lat (usec): min=8079, max=13155, avg=10392.94, stdev=1004.23 00:21:58.891 clat percentiles (usec): 00:21:58.891 | 1.00th=[ 8356], 5.00th=[ 8717], 10.00th=[ 8979], 20.00th=[ 9241], 00:21:58.891 | 30.00th=[ 9503], 40.00th=[10159], 50.00th=[10421], 60.00th=[10814], 00:21:58.891 | 70.00th=[10945], 80.00th=[11207], 90.00th=[11469], 95.00th=[11863], 00:21:58.891 | 99.00th=[12387], 99.50th=[12518], 99.90th=[12911], 99.95th=[13173], 00:21:58.891 | 99.99th=[13173] 00:21:58.891 bw ( KiB/s): min=24526, max=24625, per=46.10%, avg=24575.50, stdev=70.00, samples=2 00:21:58.891 iops : min= 6131, max= 6156, avg=6143.50, stdev=17.68, samples=2 00:21:58.891 lat (usec) : 500=0.01% 00:21:58.891 lat (msec) : 2=0.05%, 4=0.27%, 10=30.45%, 20=69.23% 00:21:58.891 cpu : usr=6.50%, sys=24.80%, ctx=744, majf=0, minf=2 00:21:58.891 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:21:58.891 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:58.891 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:58.891 issued rwts: total=5908,6144,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:58.891 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:58.891 job3: (groupid=0, jobs=1): err= 0: pid=77431: Mon Jul 15 10:03:12 2024 00:21:58.891 read: IOPS=2037, BW=8151KiB/s (8347kB/s)(8192KiB/1005msec) 00:21:58.891 slat (usec): min=4, max=9210, avg=212.34, stdev=834.38 00:21:58.891 clat (usec): min=20214, max=45750, avg=28477.98, stdev=4503.23 00:21:58.891 lat (usec): min=21538, max=46593, avg=28690.32, stdev=4539.33 00:21:58.891 clat percentiles (usec): 00:21:58.891 | 1.00th=[21890], 5.00th=[22938], 10.00th=[23725], 20.00th=[24511], 00:21:58.891 | 30.00th=[25822], 40.00th=[26870], 50.00th=[27657], 60.00th=[28705], 00:21:58.891 | 70.00th=[29492], 80.00th=[31065], 90.00th=[34341], 95.00th=[39060], 00:21:58.891 | 99.00th=[42206], 99.50th=[42730], 99.90th=[44827], 99.95th=[45351], 00:21:58.891 | 99.99th=[45876] 00:21:58.891 write: IOPS=2124, BW=8498KiB/s (8701kB/s)(8540KiB/1005msec); 0 
zone resets 00:21:58.891 slat (usec): min=10, max=8214, avg=255.62, stdev=842.99 00:21:58.891 clat (usec): min=2722, max=58585, avg=32011.30, stdev=12331.09 00:21:58.891 lat (usec): min=7865, max=58645, avg=32266.92, stdev=12426.63 00:21:58.891 clat percentiles (usec): 00:21:58.891 | 1.00th=[ 8717], 5.00th=[18744], 10.00th=[19792], 20.00th=[20055], 00:21:58.891 | 30.00th=[21365], 40.00th=[23987], 50.00th=[27132], 60.00th=[37487], 00:21:58.891 | 70.00th=[41157], 80.00th=[44827], 90.00th=[50070], 95.00th=[52691], 00:21:58.891 | 99.00th=[55313], 99.50th=[55837], 99.90th=[56361], 99.95th=[56886], 00:21:58.891 | 99.99th=[58459] 00:21:58.891 bw ( KiB/s): min= 6666, max= 9704, per=15.35%, avg=8185.00, stdev=2148.19, samples=2 00:21:58.891 iops : min= 1666, max= 2426, avg=2046.00, stdev=537.40, samples=2 00:21:58.891 lat (msec) : 4=0.02%, 10=1.00%, 20=6.60%, 50=87.11%, 100=5.26% 00:21:58.892 cpu : usr=2.09%, sys=9.26%, ctx=661, majf=0, minf=5 00:21:58.892 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:21:58.892 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:58.892 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:58.892 issued rwts: total=2048,2135,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:58.892 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:58.892 00:21:58.892 Run status group 0 (all jobs): 00:21:58.892 READ: bw=48.8MiB/s (51.2MB/s), 8151KiB/s-23.1MiB/s (8347kB/s-24.2MB/s), io=49.1MiB (51.5MB), run=1001-1005msec 00:21:58.892 WRITE: bw=52.1MiB/s (54.6MB/s), 8498KiB/s-24.0MiB/s (8701kB/s-25.1MB/s), io=52.3MiB (54.9MB), run=1001-1005msec 00:21:58.892 00:21:58.892 Disk stats (read/write): 00:21:58.892 nvme0n1: ios=2386/2560, merge=0/0, ticks=13529/11592, in_queue=25121, util=89.28% 00:21:58.892 nvme0n2: ios=1627/2048, merge=0/0, ticks=13961/19899, in_queue=33860, util=89.84% 00:21:58.892 nvme0n3: ios=5147/5482, merge=0/0, ticks=15482/13954, in_queue=29436, util=90.02% 00:21:58.892 nvme0n4: ios=1563/2048, merge=0/0, ticks=13107/20520, in_queue=33627, util=89.88% 00:21:58.892 10:03:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:21:58.892 [global] 00:21:58.892 thread=1 00:21:58.892 invalidate=1 00:21:58.892 rw=randwrite 00:21:58.892 time_based=1 00:21:58.892 runtime=1 00:21:58.892 ioengine=libaio 00:21:58.892 direct=1 00:21:58.892 bs=4096 00:21:58.892 iodepth=128 00:21:58.892 norandommap=0 00:21:58.892 numjobs=1 00:21:58.892 00:21:58.892 verify_dump=1 00:21:58.892 verify_backlog=512 00:21:58.892 verify_state_save=0 00:21:58.892 do_verify=1 00:21:58.892 verify=crc32c-intel 00:21:58.892 [job0] 00:21:58.892 filename=/dev/nvme0n1 00:21:58.892 [job1] 00:21:58.892 filename=/dev/nvme0n2 00:21:59.149 [job2] 00:21:59.149 filename=/dev/nvme0n3 00:21:59.149 [job3] 00:21:59.149 filename=/dev/nvme0n4 00:21:59.149 Could not set queue depth (nvme0n1) 00:21:59.149 Could not set queue depth (nvme0n2) 00:21:59.149 Could not set queue depth (nvme0n3) 00:21:59.149 Could not set queue depth (nvme0n4) 00:21:59.149 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:21:59.149 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:21:59.149 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:21:59.149 job3: (g=0): rw=randwrite, bs=(R) 
4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:21:59.149 fio-3.35 00:21:59.149 Starting 4 threads 00:22:00.523 00:22:00.523 job0: (groupid=0, jobs=1): err= 0: pid=77490: Mon Jul 15 10:03:13 2024 00:22:00.523 read: IOPS=5689, BW=22.2MiB/s (23.3MB/s)(22.3MiB/1003msec) 00:22:00.523 slat (usec): min=15, max=4226, avg=82.58, stdev=380.35 00:22:00.523 clat (usec): min=676, max=16985, avg=10972.68, stdev=1268.45 00:22:00.523 lat (usec): min=4249, max=17003, avg=11055.26, stdev=1285.07 00:22:00.523 clat percentiles (usec): 00:22:00.523 | 1.00th=[ 5735], 5.00th=[ 8979], 10.00th=[ 9503], 20.00th=[10159], 00:22:00.523 | 30.00th=[10683], 40.00th=[10814], 50.00th=[11076], 60.00th=[11207], 00:22:00.523 | 70.00th=[11469], 80.00th=[11731], 90.00th=[12387], 95.00th=[12780], 00:22:00.523 | 99.00th=[13960], 99.50th=[14615], 99.90th=[15795], 99.95th=[15795], 00:22:00.523 | 99.99th=[16909] 00:22:00.523 write: IOPS=6125, BW=23.9MiB/s (25.1MB/s)(24.0MiB/1003msec); 0 zone resets 00:22:00.523 slat (usec): min=20, max=3515, avg=76.59, stdev=327.46 00:22:00.523 clat (usec): min=6554, max=14233, avg=10448.37, stdev=981.02 00:22:00.523 lat (usec): min=6587, max=14537, avg=10524.96, stdev=961.85 00:22:00.523 clat percentiles (usec): 00:22:00.523 | 1.00th=[ 7504], 5.00th=[ 8094], 10.00th=[ 9110], 20.00th=[10028], 00:22:00.523 | 30.00th=[10290], 40.00th=[10421], 50.00th=[10683], 60.00th=[10814], 00:22:00.523 | 70.00th=[10945], 80.00th=[11207], 90.00th=[11338], 95.00th=[11469], 00:22:00.523 | 99.00th=[12125], 99.50th=[13173], 99.90th=[13960], 99.95th=[13960], 00:22:00.523 | 99.99th=[14222] 00:22:00.523 bw ( KiB/s): min=24152, max=24625, per=35.18%, avg=24388.50, stdev=334.46, samples=2 00:22:00.523 iops : min= 6038, max= 6156, avg=6097.00, stdev=83.44, samples=2 00:22:00.523 lat (usec) : 750=0.01% 00:22:00.523 lat (msec) : 10=18.40%, 20=81.59% 00:22:00.523 cpu : usr=5.49%, sys=24.45%, ctx=473, majf=0, minf=6 00:22:00.523 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:22:00.523 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:00.523 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:22:00.523 issued rwts: total=5707,6144,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:00.523 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:00.523 job1: (groupid=0, jobs=1): err= 0: pid=77491: Mon Jul 15 10:03:13 2024 00:22:00.523 read: IOPS=3020, BW=11.8MiB/s (12.4MB/s)(12.0MiB/1017msec) 00:22:00.523 slat (usec): min=3, max=23505, avg=155.06, stdev=1030.30 00:22:00.523 clat (usec): min=6031, max=76785, avg=17120.86, stdev=10703.27 00:22:00.523 lat (usec): min=6049, max=76814, avg=17275.92, stdev=10831.85 00:22:00.523 clat percentiles (usec): 00:22:00.523 | 1.00th=[ 6718], 5.00th=[10290], 10.00th=[10552], 20.00th=[11207], 00:22:00.523 | 30.00th=[11731], 40.00th=[12518], 50.00th=[13173], 60.00th=[14222], 00:22:00.523 | 70.00th=[15401], 80.00th=[18744], 90.00th=[28967], 95.00th=[35914], 00:22:00.523 | 99.00th=[65799], 99.50th=[73925], 99.90th=[77071], 99.95th=[77071], 00:22:00.523 | 99.99th=[77071] 00:22:00.523 write: IOPS=3396, BW=13.3MiB/s (13.9MB/s)(13.5MiB/1017msec); 0 zone resets 00:22:00.523 slat (usec): min=4, max=16177, avg=143.71, stdev=777.18 00:22:00.523 clat (usec): min=4442, max=76680, avg=22043.44, stdev=14053.65 00:22:00.523 lat (usec): min=4486, max=76693, avg=22187.15, stdev=14119.82 00:22:00.523 clat percentiles (usec): 00:22:00.523 | 1.00th=[ 5997], 5.00th=[ 9765], 10.00th=[10683], 20.00th=[11469], 
00:22:00.523 | 30.00th=[11863], 40.00th=[16712], 50.00th=[20841], 60.00th=[22676], 00:22:00.523 | 70.00th=[24249], 80.00th=[25035], 90.00th=[38011], 95.00th=[63177], 00:22:00.523 | 99.00th=[67634], 99.50th=[67634], 99.90th=[73925], 99.95th=[77071], 00:22:00.523 | 99.99th=[77071] 00:22:00.523 bw ( KiB/s): min=13256, max=13360, per=19.20%, avg=13308.00, stdev=73.54, samples=2 00:22:00.523 iops : min= 3314, max= 3340, avg=3327.00, stdev=18.38, samples=2 00:22:00.523 lat (msec) : 10=3.98%, 20=58.55%, 50=32.13%, 100=5.33% 00:22:00.523 cpu : usr=2.56%, sys=10.53%, ctx=370, majf=0, minf=5 00:22:00.523 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:22:00.523 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:00.523 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:22:00.523 issued rwts: total=3072,3454,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:00.523 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:00.523 job2: (groupid=0, jobs=1): err= 0: pid=77492: Mon Jul 15 10:03:13 2024 00:22:00.523 read: IOPS=5079, BW=19.8MiB/s (20.8MB/s)(20.0MiB/1008msec) 00:22:00.523 slat (usec): min=6, max=11575, avg=93.97, stdev=587.88 00:22:00.523 clat (usec): min=4449, max=24193, avg=12876.70, stdev=2470.05 00:22:00.523 lat (usec): min=4467, max=24222, avg=12970.67, stdev=2503.58 00:22:00.523 clat percentiles (usec): 00:22:00.523 | 1.00th=[ 6718], 5.00th=[ 9765], 10.00th=[10159], 20.00th=[11207], 00:22:00.523 | 30.00th=[11600], 40.00th=[12256], 50.00th=[12649], 60.00th=[13173], 00:22:00.523 | 70.00th=[13566], 80.00th=[14222], 90.00th=[15401], 95.00th=[17433], 00:22:00.523 | 99.00th=[21890], 99.50th=[22938], 99.90th=[23462], 99.95th=[24249], 00:22:00.523 | 99.99th=[24249] 00:22:00.523 write: IOPS=5421, BW=21.2MiB/s (22.2MB/s)(21.3MiB/1008msec); 0 zone resets 00:22:00.523 slat (usec): min=7, max=8073, avg=86.49, stdev=516.00 00:22:00.523 clat (usec): min=1742, max=24059, avg=11298.16, stdev=1644.37 00:22:00.523 lat (usec): min=4583, max=24072, avg=11384.66, stdev=1720.20 00:22:00.523 clat percentiles (usec): 00:22:00.523 | 1.00th=[ 5276], 5.00th=[ 8455], 10.00th=[ 9765], 20.00th=[10421], 00:22:00.523 | 30.00th=[10814], 40.00th=[11207], 50.00th=[11469], 60.00th=[11600], 00:22:00.523 | 70.00th=[11863], 80.00th=[12387], 90.00th=[13042], 95.00th=[13566], 00:22:00.523 | 99.00th=[14091], 99.50th=[16909], 99.90th=[19530], 99.95th=[23462], 00:22:00.523 | 99.99th=[23987] 00:22:00.523 bw ( KiB/s): min=20952, max=21744, per=30.80%, avg=21348.00, stdev=560.03, samples=2 00:22:00.523 iops : min= 5238, max= 5436, avg=5337.00, stdev=140.01, samples=2 00:22:00.523 lat (msec) : 2=0.01%, 10=10.39%, 20=88.60%, 50=1.00% 00:22:00.523 cpu : usr=5.16%, sys=20.06%, ctx=474, majf=0, minf=3 00:22:00.523 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:22:00.523 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:00.523 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:22:00.523 issued rwts: total=5120,5465,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:00.523 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:00.523 job3: (groupid=0, jobs=1): err= 0: pid=77493: Mon Jul 15 10:03:13 2024 00:22:00.523 read: IOPS=2106, BW=8426KiB/s (8629kB/s)(8536KiB/1013msec) 00:22:00.523 slat (usec): min=6, max=16720, avg=147.30, stdev=912.39 00:22:00.523 clat (usec): min=6004, max=35037, avg=17832.99, stdev=6124.67 00:22:00.523 lat (usec): min=6023, max=35769, avg=17980.30, 
stdev=6182.14 00:22:00.523 clat percentiles (usec): 00:22:00.523 | 1.00th=[ 6915], 5.00th=[10814], 10.00th=[11076], 20.00th=[12911], 00:22:00.523 | 30.00th=[13566], 40.00th=[13960], 50.00th=[15795], 60.00th=[19268], 00:22:00.523 | 70.00th=[20841], 80.00th=[23200], 90.00th=[25560], 95.00th=[30802], 00:22:00.523 | 99.00th=[33817], 99.50th=[34866], 99.90th=[34866], 99.95th=[34866], 00:22:00.523 | 99.99th=[34866] 00:22:00.523 write: IOPS=2527, BW=9.87MiB/s (10.4MB/s)(10.0MiB/1013msec); 0 zone resets 00:22:00.523 slat (usec): min=8, max=18242, avg=259.19, stdev=1236.17 00:22:00.523 clat (msec): min=4, max=119, avg=35.21, stdev=24.89 00:22:00.523 lat (msec): min=4, max=119, avg=35.47, stdev=25.02 00:22:00.523 clat percentiles (msec): 00:22:00.523 | 1.00th=[ 7], 5.00th=[ 11], 10.00th=[ 20], 20.00th=[ 21], 00:22:00.523 | 30.00th=[ 23], 40.00th=[ 24], 50.00th=[ 25], 60.00th=[ 26], 00:22:00.523 | 70.00th=[ 34], 80.00th=[ 52], 90.00th=[ 71], 95.00th=[ 99], 00:22:00.523 | 99.00th=[ 116], 99.50th=[ 118], 99.90th=[ 121], 99.95th=[ 121], 00:22:00.523 | 99.99th=[ 121] 00:22:00.523 bw ( KiB/s): min= 9992, max=10180, per=14.55%, avg=10086.00, stdev=132.94, samples=2 00:22:00.523 iops : min= 2498, max= 2545, avg=2521.50, stdev=33.23, samples=2 00:22:00.523 lat (msec) : 10=3.88%, 20=31.32%, 50=53.20%, 100=8.93%, 250=2.68% 00:22:00.523 cpu : usr=3.26%, sys=7.02%, ctx=330, majf=0, minf=7 00:22:00.523 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:22:00.523 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:00.523 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:22:00.523 issued rwts: total=2134,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:00.523 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:00.523 00:22:00.523 Run status group 0 (all jobs): 00:22:00.523 READ: bw=61.6MiB/s (64.6MB/s), 8426KiB/s-22.2MiB/s (8629kB/s-23.3MB/s), io=62.6MiB (65.7MB), run=1003-1017msec 00:22:00.523 WRITE: bw=67.7MiB/s (71.0MB/s), 9.87MiB/s-23.9MiB/s (10.4MB/s-25.1MB/s), io=68.8MiB (72.2MB), run=1003-1017msec 00:22:00.523 00:22:00.523 Disk stats (read/write): 00:22:00.523 nvme0n1: ios=5170/5173, merge=0/0, ticks=25290/20204, in_queue=45494, util=90.08% 00:22:00.523 nvme0n2: ios=2609/2927, merge=0/0, ticks=41841/58184, in_queue=100025, util=89.53% 00:22:00.523 nvme0n3: ios=4565/4608, merge=0/0, ticks=52400/46603, in_queue=99003, util=89.60% 00:22:00.523 nvme0n4: ios=2074/2055, merge=0/0, ticks=34101/72505, in_queue=106606, util=91.28% 00:22:00.523 10:03:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:22:00.523 10:03:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=77506 00:22:00.523 10:03:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:22:00.523 10:03:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:22:00.523 [global] 00:22:00.523 thread=1 00:22:00.523 invalidate=1 00:22:00.523 rw=read 00:22:00.523 time_based=1 00:22:00.523 runtime=10 00:22:00.523 ioengine=libaio 00:22:00.523 direct=1 00:22:00.523 bs=4096 00:22:00.523 iodepth=1 00:22:00.523 norandommap=1 00:22:00.523 numjobs=1 00:22:00.523 00:22:00.523 [job0] 00:22:00.523 filename=/dev/nvme0n1 00:22:00.523 [job1] 00:22:00.523 filename=/dev/nvme0n2 00:22:00.523 [job2] 00:22:00.523 filename=/dev/nvme0n3 00:22:00.523 [job3] 00:22:00.523 filename=/dev/nvme0n4 00:22:00.523 Could not set queue depth (nvme0n1) 00:22:00.523 Could not set queue depth (nvme0n2) 
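The fio-wrapper call traced above (fio.sh@58) appears to map its -i/-d/-t/-r arguments onto the bs, iodepth, rw and runtime lines of the job file it echoes, with one job per NVMe-oF namespace. For orientation only, the same workload could be reproduced outside the wrapper with a standalone job file; the sketch below is assembled from the parameters echoed in the log, is not a file shipped with the test scripts, and assumes /dev/nvme0n1 through /dev/nvme0n4 are the namespaces exposed by the connected subsystem.

  ; read-qd1.fio -- hypothetical standalone equivalent of the generated job file above
  [global]
  ioengine=libaio
  direct=1
  thread=1
  invalidate=1
  rw=read
  bs=4096
  iodepth=1
  norandommap=1
  numjobs=1
  time_based=1
  runtime=10

  [job0]
  filename=/dev/nvme0n1
  [job1]
  filename=/dev/nvme0n2
  [job2]
  filename=/dev/nvme0n3
  [job3]
  filename=/dev/nvme0n4

Run as "fio read-qd1.fio". Because the job is time_based for 10 seconds, it is still in flight when the test starts deleting the backing bdevs a few lines further down, which is what produces the expected Remote I/O errors (err=121) in the output that follows.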
00:22:00.523 Could not set queue depth (nvme0n3) 00:22:00.523 Could not set queue depth (nvme0n4) 00:22:00.780 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:22:00.780 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:22:00.780 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:22:00.780 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:22:00.780 fio-3.35 00:22:00.780 Starting 4 threads 00:22:04.059 10:03:16 nvmf_tcp.nvmf_fio_target -- target/fio.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete concat0 00:22:04.059 fio: pid=77549, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:22:04.059 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=42553344, buflen=4096 00:22:04.059 10:03:17 nvmf_tcp.nvmf_fio_target -- target/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete raid0 00:22:04.059 fio: pid=77548, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:22:04.059 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=74383360, buflen=4096 00:22:04.059 10:03:17 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:22:04.059 10:03:17 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:22:04.059 fio: pid=77546, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:22:04.059 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=22536192, buflen=4096 00:22:04.059 10:03:17 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:22:04.059 10:03:17 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:22:04.319 fio: pid=77547, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:22:04.319 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=62316544, buflen=4096 00:22:04.319 00:22:04.319 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=77546: Mon Jul 15 10:03:17 2024 00:22:04.319 read: IOPS=6755, BW=26.4MiB/s (27.7MB/s)(85.5MiB/3240msec) 00:22:04.319 slat (usec): min=6, max=13784, avg=10.86, stdev=162.42 00:22:04.319 clat (usec): min=88, max=2797, avg=136.39, stdev=26.77 00:22:04.319 lat (usec): min=101, max=14062, avg=147.24, stdev=165.52 00:22:04.319 clat percentiles (usec): 00:22:04.319 | 1.00th=[ 113], 5.00th=[ 118], 10.00th=[ 122], 20.00th=[ 127], 00:22:04.319 | 30.00th=[ 130], 40.00th=[ 133], 50.00th=[ 137], 60.00th=[ 139], 00:22:04.319 | 70.00th=[ 141], 80.00th=[ 145], 90.00th=[ 151], 95.00th=[ 157], 00:22:04.319 | 99.00th=[ 167], 99.50th=[ 174], 99.90th=[ 217], 99.95th=[ 318], 00:22:04.319 | 99.99th=[ 1450] 00:22:04.319 bw ( KiB/s): min=26019, max=27720, per=35.79%, avg=27003.17, stdev=574.83, samples=6 00:22:04.319 iops : min= 6504, max= 6930, avg=6750.67, stdev=143.96, samples=6 00:22:04.319 lat (usec) : 100=0.11%, 250=99.82%, 500=0.04%, 750=0.01%, 1000=0.01% 00:22:04.319 lat (msec) : 2=0.01%, 4=0.01% 00:22:04.320 cpu : usr=0.86%, sys=4.85%, ctx=21892, majf=0, minf=1 00:22:04.320 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:04.320 
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:04.320 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:04.320 issued rwts: total=21887,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:04.320 latency : target=0, window=0, percentile=100.00%, depth=1 00:22:04.320 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=77547: Mon Jul 15 10:03:17 2024 00:22:04.320 read: IOPS=4372, BW=17.1MiB/s (17.9MB/s)(59.4MiB/3480msec) 00:22:04.320 slat (usec): min=6, max=14458, avg=12.78, stdev=203.70 00:22:04.320 clat (usec): min=86, max=876, avg=215.15, stdev=66.41 00:22:04.320 lat (usec): min=93, max=14590, avg=227.93, stdev=213.18 00:22:04.320 clat percentiles (usec): 00:22:04.320 | 1.00th=[ 99], 5.00th=[ 109], 10.00th=[ 115], 20.00th=[ 127], 00:22:04.320 | 30.00th=[ 147], 40.00th=[ 235], 50.00th=[ 247], 60.00th=[ 253], 00:22:04.320 | 70.00th=[ 260], 80.00th=[ 269], 90.00th=[ 277], 95.00th=[ 285], 00:22:04.320 | 99.00th=[ 314], 99.50th=[ 347], 99.90th=[ 383], 99.95th=[ 449], 00:22:04.320 | 99.99th=[ 742] 00:22:04.320 bw ( KiB/s): min=14696, max=20203, per=21.18%, avg=15984.50, stdev=2102.17, samples=6 00:22:04.320 iops : min= 3674, max= 5050, avg=3996.00, stdev=525.24, samples=6 00:22:04.320 lat (usec) : 100=1.27%, 250=52.96%, 500=45.74%, 750=0.02%, 1000=0.01% 00:22:04.320 cpu : usr=0.63%, sys=3.13%, ctx=15222, majf=0, minf=1 00:22:04.320 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:04.320 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:04.320 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:04.320 issued rwts: total=15215,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:04.320 latency : target=0, window=0, percentile=100.00%, depth=1 00:22:04.320 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=77548: Mon Jul 15 10:03:17 2024 00:22:04.320 read: IOPS=5968, BW=23.3MiB/s (24.4MB/s)(70.9MiB/3043msec) 00:22:04.320 slat (usec): min=6, max=13375, avg=10.75, stdev=137.70 00:22:04.320 clat (usec): min=118, max=1503, avg=156.03, stdev=28.80 00:22:04.320 lat (usec): min=125, max=13523, avg=166.78, stdev=140.74 00:22:04.320 clat percentiles (usec): 00:22:04.320 | 1.00th=[ 129], 5.00th=[ 135], 10.00th=[ 141], 20.00th=[ 145], 00:22:04.320 | 30.00th=[ 149], 40.00th=[ 153], 50.00th=[ 155], 60.00th=[ 159], 00:22:04.320 | 70.00th=[ 161], 80.00th=[ 165], 90.00th=[ 172], 95.00th=[ 178], 00:22:04.320 | 99.00th=[ 192], 99.50th=[ 198], 99.90th=[ 273], 99.95th=[ 652], 00:22:04.320 | 99.99th=[ 1450] 00:22:04.320 bw ( KiB/s): min=23608, max=24296, per=31.73%, avg=23944.00, stdev=332.89, samples=5 00:22:04.320 iops : min= 5902, max= 6074, avg=5986.00, stdev=83.22, samples=5 00:22:04.320 lat (usec) : 250=99.87%, 500=0.06%, 750=0.02%, 1000=0.01% 00:22:04.320 lat (msec) : 2=0.04% 00:22:04.320 cpu : usr=0.79%, sys=4.64%, ctx=18163, majf=0, minf=1 00:22:04.320 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:04.320 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:04.320 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:04.320 issued rwts: total=18161,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:04.320 latency : target=0, window=0, percentile=100.00%, depth=1 00:22:04.320 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=77549: Mon Jul 15 10:03:17 2024 00:22:04.320 
read: IOPS=3644, BW=14.2MiB/s (14.9MB/s)(40.6MiB/2851msec) 00:22:04.320 slat (usec): min=6, max=134, avg=12.97, stdev= 5.77 00:22:04.320 clat (usec): min=136, max=8067, avg=260.21, stdev=184.87 00:22:04.320 lat (usec): min=145, max=8077, avg=273.17, stdev=184.98 00:22:04.320 clat percentiles (usec): 00:22:04.320 | 1.00th=[ 210], 5.00th=[ 223], 10.00th=[ 229], 20.00th=[ 237], 00:22:04.320 | 30.00th=[ 243], 40.00th=[ 247], 50.00th=[ 253], 60.00th=[ 258], 00:22:04.320 | 70.00th=[ 262], 80.00th=[ 269], 90.00th=[ 277], 95.00th=[ 285], 00:22:04.320 | 99.00th=[ 334], 99.50th=[ 359], 99.90th=[ 3294], 99.95th=[ 3916], 00:22:04.320 | 99.99th=[ 7898] 00:22:04.320 bw ( KiB/s): min=14208, max=15200, per=19.37%, avg=14619.20, stdev=381.89, samples=5 00:22:04.320 iops : min= 3552, max= 3800, avg=3654.80, stdev=95.47, samples=5 00:22:04.320 lat (usec) : 250=45.63%, 500=54.14%, 750=0.03%, 1000=0.02% 00:22:04.320 lat (msec) : 2=0.01%, 4=0.12%, 10=0.05% 00:22:04.320 cpu : usr=1.09%, sys=3.51%, ctx=10399, majf=0, minf=2 00:22:04.320 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:04.320 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:04.320 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:04.320 issued rwts: total=10390,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:04.320 latency : target=0, window=0, percentile=100.00%, depth=1 00:22:04.320 00:22:04.320 Run status group 0 (all jobs): 00:22:04.320 READ: bw=73.7MiB/s (77.3MB/s), 14.2MiB/s-26.4MiB/s (14.9MB/s-27.7MB/s), io=256MiB (269MB), run=2851-3480msec 00:22:04.320 00:22:04.320 Disk stats (read/write): 00:22:04.320 nvme0n1: ios=21085/0, merge=0/0, ticks=2946/0, in_queue=2946, util=95.19% 00:22:04.320 nvme0n2: ios=14432/0, merge=0/0, ticks=3241/0, in_queue=3241, util=95.36% 00:22:04.320 nvme0n3: ios=17353/0, merge=0/0, ticks=2743/0, in_queue=2743, util=96.68% 00:22:04.320 nvme0n4: ios=9637/0, merge=0/0, ticks=2497/0, in_queue=2497, util=96.12% 00:22:04.320 10:03:17 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:22:04.320 10:03:17 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:22:04.580 10:03:17 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:22:04.580 10:03:17 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:22:04.904 10:03:18 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:22:04.904 10:03:18 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:22:04.904 10:03:18 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:22:04.904 10:03:18 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:22:05.163 10:03:18 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:22:05.163 10:03:18 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:22:05.422 10:03:18 nvmf_tcp.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:22:05.422 10:03:18 
nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # wait 77506 00:22:05.422 10:03:18 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:22:05.422 10:03:18 nvmf_tcp.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:22:05.422 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:22:05.422 10:03:18 nvmf_tcp.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:22:05.422 10:03:18 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:22:05.422 10:03:18 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:22:05.422 10:03:18 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:22:05.422 10:03:18 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:22:05.422 10:03:18 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:22:05.422 nvmf hotplug test: fio failed as expected 00:22:05.422 10:03:18 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:22:05.422 10:03:18 nvmf_tcp.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:22:05.422 10:03:18 nvmf_tcp.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:22:05.422 10:03:18 nvmf_tcp.nvmf_fio_target -- target/fio.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:05.682 10:03:19 nvmf_tcp.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:22:05.682 10:03:19 nvmf_tcp.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:22:05.682 10:03:19 nvmf_tcp.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:22:05.682 10:03:19 nvmf_tcp.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:22:05.682 10:03:19 nvmf_tcp.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:22:05.682 10:03:19 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:05.682 10:03:19 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 00:22:05.682 10:03:19 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:05.682 10:03:19 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e 00:22:05.682 10:03:19 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:05.682 10:03:19 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:05.682 rmmod nvme_tcp 00:22:05.682 rmmod nvme_fabrics 00:22:05.682 rmmod nvme_keyring 00:22:05.682 10:03:19 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:05.682 10:03:19 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e 00:22:05.682 10:03:19 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@125 -- # return 0 00:22:05.682 10:03:19 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@489 -- # '[' -n 77020 ']' 00:22:05.682 10:03:19 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 77020 00:22:05.682 10:03:19 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@948 -- # '[' -z 77020 ']' 00:22:05.682 10:03:19 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@952 -- # kill -0 77020 00:22:05.682 10:03:19 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@953 -- # uname 00:22:05.682 10:03:19 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:05.682 10:03:19 nvmf_tcp.nvmf_fio_target -- 
common/autotest_common.sh@954 -- # ps --no-headers -o comm= 77020 00:22:05.682 killing process with pid 77020 00:22:05.682 10:03:19 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:05.682 10:03:19 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:05.682 10:03:19 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 77020' 00:22:05.682 10:03:19 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@967 -- # kill 77020 00:22:05.682 10:03:19 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@972 -- # wait 77020 00:22:05.942 10:03:19 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:05.942 10:03:19 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:05.942 10:03:19 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:05.942 10:03:19 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:05.942 10:03:19 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:05.942 10:03:19 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:05.942 10:03:19 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:05.942 10:03:19 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:05.942 10:03:19 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:22:05.942 00:22:05.942 real 0m18.728s 00:22:05.942 user 1m11.885s 00:22:05.942 sys 0m7.646s 00:22:05.942 10:03:19 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:05.942 10:03:19 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:22:05.942 ************************************ 00:22:05.942 END TEST nvmf_fio_target 00:22:05.942 ************************************ 00:22:06.202 10:03:19 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:22:06.202 10:03:19 nvmf_tcp -- nvmf/nvmf.sh@56 -- # run_test nvmf_bdevio /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:22:06.202 10:03:19 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:22:06.202 10:03:19 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:06.202 10:03:19 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:06.202 ************************************ 00:22:06.202 START TEST nvmf_bdevio 00:22:06.202 ************************************ 00:22:06.203 10:03:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:22:06.203 * Looking for test storage... 
00:22:06.203 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:22:06.203 10:03:19 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:06.203 10:03:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:22:06.203 10:03:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:06.203 10:03:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:06.203 10:03:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:06.203 10:03:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:06.203 10:03:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:06.203 10:03:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:06.203 10:03:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:06.203 10:03:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:06.203 10:03:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:06.203 10:03:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:06.203 10:03:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec 00:22:06.203 10:03:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=a2b6b25a-cc90-4aea-9f09-c06f8a634aec 00:22:06.203 10:03:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:06.203 10:03:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:06.203 10:03:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:06.203 10:03:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:06.203 10:03:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:06.203 10:03:19 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:06.203 10:03:19 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:06.203 10:03:19 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:06.203 10:03:19 nvmf_tcp.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:06.203 10:03:19 nvmf_tcp.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:06.203 10:03:19 
nvmf_tcp.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:06.203 10:03:19 nvmf_tcp.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:22:06.203 10:03:19 nvmf_tcp.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:06.203 10:03:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:22:06.203 10:03:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:06.203 10:03:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:06.203 10:03:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:06.203 10:03:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:06.203 10:03:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:06.203 10:03:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:06.203 10:03:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:06.203 10:03:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:06.203 10:03:19 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:06.203 10:03:19 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:06.203 10:03:19 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:22:06.203 10:03:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:06.203 10:03:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:06.203 10:03:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:06.203 10:03:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:06.203 10:03:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:06.203 10:03:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:06.203 10:03:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:06.203 10:03:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:06.203 10:03:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:22:06.203 10:03:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:22:06.203 10:03:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:22:06.203 10:03:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:22:06.203 10:03:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@431 -- 
# [[ tcp == tcp ]] 00:22:06.203 10:03:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@432 -- # nvmf_veth_init 00:22:06.203 10:03:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:06.203 10:03:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:06.203 10:03:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:22:06.203 10:03:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:22:06.203 10:03:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:22:06.203 10:03:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:22:06.203 10:03:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:22:06.203 10:03:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:06.203 10:03:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:22:06.203 10:03:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:22:06.203 10:03:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:22:06.203 10:03:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:22:06.203 10:03:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:22:06.203 10:03:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:22:06.203 Cannot find device "nvmf_tgt_br" 00:22:06.203 10:03:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@155 -- # true 00:22:06.203 10:03:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:22:06.203 Cannot find device "nvmf_tgt_br2" 00:22:06.203 10:03:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@156 -- # true 00:22:06.203 10:03:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:22:06.203 10:03:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:22:06.463 Cannot find device "nvmf_tgt_br" 00:22:06.463 10:03:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@158 -- # true 00:22:06.463 10:03:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:22:06.463 Cannot find device "nvmf_tgt_br2" 00:22:06.463 10:03:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@159 -- # true 00:22:06.463 10:03:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:22:06.463 10:03:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:22:06.463 10:03:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:06.463 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:06.463 10:03:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@162 -- # true 00:22:06.463 10:03:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:06.463 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:06.463 10:03:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@163 -- # true 00:22:06.463 10:03:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:22:06.463 10:03:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:22:06.463 10:03:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@170 -- # ip link add 
nvmf_tgt_if type veth peer name nvmf_tgt_br 00:22:06.463 10:03:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:22:06.463 10:03:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:22:06.463 10:03:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:22:06.463 10:03:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:22:06.463 10:03:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:22:06.463 10:03:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:22:06.463 10:03:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:22:06.463 10:03:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:22:06.463 10:03:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:22:06.463 10:03:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:22:06.463 10:03:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:22:06.463 10:03:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:22:06.463 10:03:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:22:06.463 10:03:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:22:06.464 10:03:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:22:06.464 10:03:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:22:06.464 10:03:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:22:06.464 10:03:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:22:06.464 10:03:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:22:06.464 10:03:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:22:06.464 10:03:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:22:06.464 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:06.464 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.056 ms 00:22:06.464 00:22:06.464 --- 10.0.0.2 ping statistics --- 00:22:06.464 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:06.464 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:22:06.464 10:03:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:22:06.464 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:22:06.464 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.032 ms 00:22:06.464 00:22:06.464 --- 10.0.0.3 ping statistics --- 00:22:06.464 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:06.464 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:22:06.464 10:03:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:22:06.464 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:06.464 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.018 ms 00:22:06.464 00:22:06.464 --- 10.0.0.1 ping statistics --- 00:22:06.464 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:06.464 rtt min/avg/max/mdev = 0.018/0.018/0.018/0.000 ms 00:22:06.464 10:03:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:06.464 10:03:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@433 -- # return 0 00:22:06.464 10:03:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:06.464 10:03:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:06.464 10:03:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:06.464 10:03:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:06.464 10:03:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:06.464 10:03:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:06.464 10:03:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:06.724 10:03:20 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:22:06.724 10:03:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:06.724 10:03:20 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:06.724 10:03:20 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:22:06.724 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:06.724 10:03:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=77867 00:22:06.724 10:03:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 77867 00:22:06.724 10:03:20 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@829 -- # '[' -z 77867 ']' 00:22:06.724 10:03:20 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:06.724 10:03:20 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:06.724 10:03:20 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:06.724 10:03:20 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:06.724 10:03:20 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:22:06.724 10:03:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:22:06.724 [2024-07-15 10:03:20.122070] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:22:06.724 [2024-07-15 10:03:20.122141] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:06.724 [2024-07-15 10:03:20.265265] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:06.984 [2024-07-15 10:03:20.374715] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:06.984 [2024-07-15 10:03:20.374865] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
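The nvmf_veth_init sequence traced above is what builds the test bed: a network namespace (nvmf_tgt_ns_spdk) holds the target ends of two veth pairs, the initiator end stays in the default namespace at 10.0.0.1, the bridge-side peers all hang off nvmf_br, and iptables is opened for TCP port 4420. A condensed sketch of that topology as a standalone script (interface names and addresses copied from the trace; this is not the test's own code):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br        # initiator side
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br         # first target port
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2        # second target port
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  ip link set nvmf_init_if up; ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up;  ip link set nvmf_tgt_br2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3                          # initiator -> target ports
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1                 # target -> initiator

The pings at the end mirror the three connectivity checks above; sub-millisecond RTTs are expected since all traffic stays on local veth links.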
00:22:06.984 [2024-07-15 10:03:20.374900] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:06.984 [2024-07-15 10:03:20.374926] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:06.984 [2024-07-15 10:03:20.374941] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:06.984 [2024-07-15 10:03:20.375203] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:22:06.984 [2024-07-15 10:03:20.375544] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:22:06.984 [2024-07-15 10:03:20.375426] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:22:06.984 [2024-07-15 10:03:20.375552] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:22:07.554 10:03:21 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:07.554 10:03:21 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@862 -- # return 0 00:22:07.554 10:03:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:07.554 10:03:21 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:07.554 10:03:21 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:22:07.554 10:03:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:07.554 10:03:21 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:07.554 10:03:21 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:07.554 10:03:21 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:22:07.554 [2024-07-15 10:03:21.071536] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:07.554 10:03:21 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:07.554 10:03:21 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:22:07.554 10:03:21 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:07.554 10:03:21 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:22:07.554 Malloc0 00:22:07.554 10:03:21 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:07.554 10:03:21 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:07.554 10:03:21 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:07.554 10:03:21 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:22:07.554 10:03:21 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:07.554 10:03:21 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:07.554 10:03:21 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:07.554 10:03:21 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:22:07.813 10:03:21 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:07.813 10:03:21 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:07.813 10:03:21 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:07.813 10:03:21 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 
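The target setup for the bdevio test is pure JSON-RPC against the nvmf_tgt started above: create the TCP transport, back it with a 64 MiB, 512 B-block malloc bdev, and expose it through cnode1. Driven by hand with scripts/rpc.py instead of the test's rpc_cmd wrapper, the same sequence would look roughly like this (assuming the default /var/tmp/spdk.sock RPC socket, which a Unix-domain client can reach regardless of the network namespace):

  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192          # flags exactly as traced above
  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0             # 64 MiB, 512 B blocks
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The "NVMe/TCP Target Listening on 10.0.0.2 port 4420" notice that follows confirms the last call took effect.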
00:22:07.813 [2024-07-15 10:03:21.145453] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:07.813 10:03:21 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:07.813 10:03:21 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:22:07.813 10:03:21 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:22:07.813 10:03:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:22:07.813 10:03:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 00:22:07.813 10:03:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:07.813 10:03:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:07.813 { 00:22:07.813 "params": { 00:22:07.813 "name": "Nvme$subsystem", 00:22:07.813 "trtype": "$TEST_TRANSPORT", 00:22:07.813 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:07.813 "adrfam": "ipv4", 00:22:07.813 "trsvcid": "$NVMF_PORT", 00:22:07.813 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:07.813 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:07.813 "hdgst": ${hdgst:-false}, 00:22:07.813 "ddgst": ${ddgst:-false} 00:22:07.813 }, 00:22:07.813 "method": "bdev_nvme_attach_controller" 00:22:07.813 } 00:22:07.813 EOF 00:22:07.813 )") 00:22:07.813 10:03:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:22:07.813 10:03:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 00:22:07.813 10:03:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:22:07.813 10:03:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:22:07.813 "params": { 00:22:07.813 "name": "Nvme1", 00:22:07.813 "trtype": "tcp", 00:22:07.813 "traddr": "10.0.0.2", 00:22:07.813 "adrfam": "ipv4", 00:22:07.813 "trsvcid": "4420", 00:22:07.813 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:07.813 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:07.813 "hdgst": false, 00:22:07.813 "ddgst": false 00:22:07.813 }, 00:22:07.813 "method": "bdev_nvme_attach_controller" 00:22:07.813 }' 00:22:07.813 [2024-07-15 10:03:21.202168] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:22:07.813 [2024-07-15 10:03:21.202327] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77921 ] 00:22:07.813 [2024-07-15 10:03:21.341443] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:22:08.072 [2024-07-15 10:03:21.446755] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:08.072 [2024-07-15 10:03:21.446788] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:08.072 [2024-07-15 10:03:21.446787] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:08.072 I/O targets: 00:22:08.072 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:22:08.072 00:22:08.072 00:22:08.072 CUnit - A unit testing framework for C - Version 2.1-3 00:22:08.072 http://cunit.sourceforge.net/ 00:22:08.072 00:22:08.072 00:22:08.072 Suite: bdevio tests on: Nvme1n1 00:22:08.072 Test: blockdev write read block ...passed 00:22:08.330 Test: blockdev write zeroes read block ...passed 00:22:08.330 Test: blockdev write zeroes read no split ...passed 00:22:08.330 Test: blockdev write zeroes read split ...passed 00:22:08.330 Test: blockdev write zeroes read split partial ...passed 00:22:08.330 Test: blockdev reset ...[2024-07-15 10:03:21.721967] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:08.330 [2024-07-15 10:03:21.722164] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe19180 (9): Bad file descriptor 00:22:08.330 [2024-07-15 10:03:21.732691] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
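The --json config that gen_nvmf_target_json emitted above boils down to a single bdev_nvme_attach_controller entry named Nvme1 pointing at the subsystem just exported; bdevio reads it from /dev/fd/62 at startup and surfaces the namespace as Nvme1n1 (the 64 MiB, 512 B malloc bdev, hence 131072 blocks). Attaching the same controller interactively against an app with the bdev_nvme module loaded would be approximately as follows (the flag spellings are the usual rpc.py ones and do not appear in the trace; hdgst/ddgst are simply left at their false defaults):

  ./scripts/rpc.py bdev_nvme_attach_controller -b Nvme1 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 \
      -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1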
00:22:08.330 passed 00:22:08.330 Test: blockdev write read 8 blocks ...passed 00:22:08.330 Test: blockdev write read size > 128k ...passed 00:22:08.330 Test: blockdev write read invalid size ...passed 00:22:08.330 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:22:08.330 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:22:08.330 Test: blockdev write read max offset ...passed 00:22:08.330 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:22:08.330 Test: blockdev writev readv 8 blocks ...passed 00:22:08.330 Test: blockdev writev readv 30 x 1block ...passed 00:22:08.330 Test: blockdev writev readv block ...passed 00:22:08.330 Test: blockdev writev readv size > 128k ...passed 00:22:08.330 Test: blockdev writev readv size > 128k in two iovs ...passed 00:22:08.330 Test: blockdev comparev and writev ...[2024-07-15 10:03:21.906803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:08.330 [2024-07-15 10:03:21.906849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:08.330 [2024-07-15 10:03:21.906864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:08.330 [2024-07-15 10:03:21.906871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:08.330 [2024-07-15 10:03:21.907139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:08.330 [2024-07-15 10:03:21.907149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:22:08.330 [2024-07-15 10:03:21.907161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:08.331 [2024-07-15 10:03:21.907168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:22:08.331 [2024-07-15 10:03:21.907397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:08.331 [2024-07-15 10:03:21.907406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:22:08.331 [2024-07-15 10:03:21.907417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:08.331 [2024-07-15 10:03:21.907424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:22:08.331 [2024-07-15 10:03:21.907639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:08.331 [2024-07-15 10:03:21.907648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:22:08.331 [2024-07-15 10:03:21.907659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:08.331 [2024-07-15 10:03:21.907666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:22:08.590 passed 00:22:08.590 Test: blockdev nvme passthru rw ...passed 00:22:08.590 Test: blockdev nvme passthru vendor specific ...passed 00:22:08.590 Test: blockdev nvme admin passthru ...[2024-07-15 10:03:21.991151] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:08.590 [2024-07-15 10:03:21.991191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:08.590 [2024-07-15 10:03:21.991296] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:08.590 [2024-07-15 10:03:21.991308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:22:08.590 [2024-07-15 10:03:21.991393] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:08.590 [2024-07-15 10:03:21.991401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:22:08.590 [2024-07-15 10:03:21.991482] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:08.590 [2024-07-15 10:03:21.991490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:22:08.590 passed 00:22:08.590 Test: blockdev copy ...passed 00:22:08.590 00:22:08.590 Run Summary: Type Total Ran Passed Failed Inactive 00:22:08.590 suites 1 1 n/a 0 0 00:22:08.590 tests 23 23 23 0 0 00:22:08.590 asserts 152 152 152 0 n/a 00:22:08.590 00:22:08.590 Elapsed time = 0.888 seconds 00:22:08.849 10:03:22 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:08.849 10:03:22 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:08.849 10:03:22 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:22:08.849 10:03:22 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:08.849 10:03:22 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:22:08.849 10:03:22 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:22:08.849 10:03:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:08.849 10:03:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 00:22:08.849 10:03:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:08.849 10:03:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 00:22:08.849 10:03:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:08.849 10:03:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:08.849 rmmod nvme_tcp 00:22:08.849 rmmod nvme_fabrics 00:22:08.849 rmmod nvme_keyring 00:22:08.849 10:03:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:08.849 10:03:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 00:22:08.849 10:03:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 00:22:08.849 10:03:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 77867 ']' 00:22:08.849 10:03:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 77867 00:22:08.849 10:03:22 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@948 -- # '[' -z 
77867 ']' 00:22:08.849 10:03:22 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@952 -- # kill -0 77867 00:22:08.850 10:03:22 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@953 -- # uname 00:22:08.850 10:03:22 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:08.850 10:03:22 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 77867 00:22:08.850 10:03:22 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:22:08.850 10:03:22 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:22:08.850 killing process with pid 77867 00:22:08.850 10:03:22 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@966 -- # echo 'killing process with pid 77867' 00:22:08.850 10:03:22 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@967 -- # kill 77867 00:22:08.850 10:03:22 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@972 -- # wait 77867 00:22:09.109 10:03:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:09.109 10:03:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:09.109 10:03:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:09.109 10:03:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:09.109 10:03:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:09.109 10:03:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:09.109 10:03:22 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:09.109 10:03:22 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:09.109 10:03:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:22:09.109 00:22:09.109 real 0m3.081s 00:22:09.109 user 0m10.829s 00:22:09.109 sys 0m0.752s 00:22:09.109 10:03:22 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:09.109 ************************************ 00:22:09.109 END TEST nvmf_bdevio 00:22:09.109 ************************************ 00:22:09.109 10:03:22 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:22:09.370 10:03:22 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:22:09.370 10:03:22 nvmf_tcp -- nvmf/nvmf.sh@57 -- # run_test nvmf_auth_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:22:09.370 10:03:22 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:22:09.370 10:03:22 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:09.370 10:03:22 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:09.370 ************************************ 00:22:09.370 START TEST nvmf_auth_target 00:22:09.370 ************************************ 00:22:09.370 10:03:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:22:09.370 * Looking for test storage... 
00:22:09.370 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:22:09.370 10:03:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:09.370 10:03:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:22:09.370 10:03:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:09.370 10:03:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:09.370 10:03:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:09.370 10:03:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:09.370 10:03:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:09.370 10:03:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:09.370 10:03:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:09.370 10:03:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:09.370 10:03:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:09.370 10:03:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:09.370 10:03:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec 00:22:09.370 10:03:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=a2b6b25a-cc90-4aea-9f09-c06f8a634aec 00:22:09.370 10:03:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:09.370 10:03:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:09.370 10:03:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:09.370 10:03:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:09.370 10:03:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:09.370 10:03:22 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:09.370 10:03:22 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:09.370 10:03:22 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:09.370 10:03:22 nvmf_tcp.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:09.370 10:03:22 nvmf_tcp.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:09.370 10:03:22 nvmf_tcp.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:09.370 10:03:22 nvmf_tcp.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:22:09.370 10:03:22 nvmf_tcp.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:09.370 10:03:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@47 -- # : 0 00:22:09.370 10:03:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:09.370 10:03:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:09.370 10:03:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:09.370 10:03:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:09.370 10:03:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:09.370 10:03:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:09.370 10:03:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:09.370 10:03:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:09.370 10:03:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:22:09.370 10:03:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:22:09.370 10:03:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:22:09.370 10:03:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec 00:22:09.370 10:03:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:22:09.370 10:03:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:22:09.370 10:03:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:22:09.370 10:03:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@59 -- # 
nvmftestinit 00:22:09.370 10:03:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:09.370 10:03:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:09.370 10:03:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:09.370 10:03:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:09.370 10:03:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:09.370 10:03:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:09.370 10:03:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:09.370 10:03:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:09.370 10:03:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:22:09.370 10:03:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:22:09.370 10:03:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:22:09.370 10:03:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:22:09.370 10:03:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:22:09.370 10:03:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@432 -- # nvmf_veth_init 00:22:09.371 10:03:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:09.371 10:03:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:09.371 10:03:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:22:09.371 10:03:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:22:09.371 10:03:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:22:09.371 10:03:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:22:09.371 10:03:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:22:09.371 10:03:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:09.371 10:03:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:22:09.371 10:03:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:22:09.371 10:03:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:22:09.371 10:03:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:22:09.371 10:03:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:22:09.371 10:03:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:22:09.371 Cannot find device "nvmf_tgt_br" 00:22:09.371 10:03:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@155 -- # true 00:22:09.371 10:03:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:22:09.371 Cannot find device "nvmf_tgt_br2" 00:22:09.371 10:03:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@156 -- # true 00:22:09.371 10:03:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:22:09.371 10:03:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:22:09.371 Cannot find device "nvmf_tgt_br" 00:22:09.371 
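The "Cannot find device" lines here are expected noise: nvmftestinit begins by tearing down whatever a previous run may have left behind, and every cleanup command carries a fallback so a missing interface never turns into a fatal error, which is why each failing ip command is immediately followed by a bare "# true" at the same script line in the trace. The pattern, as a minimal sketch (not the script's literal text):

  # Best-effort teardown; ignore devices that do not exist yet.
  ip link set nvmf_init_br nomaster || true
  ip link set nvmf_tgt_br  nomaster || true      # prints "Cannot find device" on a clean host
  ip link set nvmf_tgt_br2 nomaster || true
  ip link set nvmf_tgt_br  down     || true
  ip link set nvmf_tgt_br2 down     || true
  ip link delete nvmf_br type bridge || true
  ip link delete nvmf_init_if        || true
  ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if  || true
  ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 || true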
10:03:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@158 -- # true 00:22:09.371 10:03:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:22:09.629 Cannot find device "nvmf_tgt_br2" 00:22:09.629 10:03:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@159 -- # true 00:22:09.629 10:03:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:22:09.629 10:03:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:22:09.629 10:03:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:09.629 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:09.629 10:03:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@162 -- # true 00:22:09.629 10:03:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:09.629 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:09.629 10:03:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@163 -- # true 00:22:09.630 10:03:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:22:09.630 10:03:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:22:09.630 10:03:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:22:09.630 10:03:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:22:09.630 10:03:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:22:09.630 10:03:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:22:09.630 10:03:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:22:09.630 10:03:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:22:09.630 10:03:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:22:09.630 10:03:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:22:09.630 10:03:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:22:09.630 10:03:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:22:09.630 10:03:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:22:09.630 10:03:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:22:09.630 10:03:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:22:09.630 10:03:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:22:09.630 10:03:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:22:09.630 10:03:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:22:09.630 10:03:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:22:09.887 10:03:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:22:09.887 10:03:23 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:22:09.888 10:03:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:22:09.888 10:03:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:22:09.888 10:03:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:22:09.888 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:09.888 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.140 ms 00:22:09.888 00:22:09.888 --- 10.0.0.2 ping statistics --- 00:22:09.888 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:09.888 rtt min/avg/max/mdev = 0.140/0.140/0.140/0.000 ms 00:22:09.888 10:03:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:22:09.888 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:22:09.888 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.094 ms 00:22:09.888 00:22:09.888 --- 10.0.0.3 ping statistics --- 00:22:09.888 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:09.888 rtt min/avg/max/mdev = 0.094/0.094/0.094/0.000 ms 00:22:09.888 10:03:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:22:09.888 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:09.888 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.075 ms 00:22:09.888 00:22:09.888 --- 10.0.0.1 ping statistics --- 00:22:09.888 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:09.888 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:22:09.888 10:03:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:09.888 10:03:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@433 -- # return 0 00:22:09.888 10:03:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:09.888 10:03:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:09.888 10:03:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:09.888 10:03:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:09.888 10:03:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:09.888 10:03:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:09.888 10:03:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:09.888 10:03:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@60 -- # nvmfappstart -L nvmf_auth 00:22:09.888 10:03:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:09.888 10:03:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:09.888 10:03:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:09.888 10:03:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=78104 00:22:09.888 10:03:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:22:09.888 10:03:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 78104 00:22:09.888 10:03:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 78104 ']' 00:22:09.888 10:03:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:09.888 10:03:23 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:09.888 10:03:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:09.888 10:03:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:09.888 10:03:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:10.824 10:03:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:10.824 10:03:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:22:10.824 10:03:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:10.824 10:03:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:10.824 10:03:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:10.824 10:03:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:10.824 10:03:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@62 -- # hostpid=78148 00:22:10.824 10:03:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:22:10.824 10:03:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@64 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:22:10.824 10:03:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key null 48 00:22:10.824 10:03:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:22:10.824 10:03:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:22:10.824 10:03:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:22:10.824 10:03:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=null 00:22:10.824 10:03:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:22:10.824 10:03:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:22:10.824 10:03:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=5d136f5d0711e2e2aef62e4ef3acfd8d57e8411ebeb3afae 00:22:10.824 10:03:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:22:10.824 10:03:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.iaQ 00:22:10.824 10:03:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 5d136f5d0711e2e2aef62e4ef3acfd8d57e8411ebeb3afae 0 00:22:10.824 10:03:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 5d136f5d0711e2e2aef62e4ef3acfd8d57e8411ebeb3afae 0 00:22:10.824 10:03:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:22:10.824 10:03:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:22:10.824 10:03:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=5d136f5d0711e2e2aef62e4ef3acfd8d57e8411ebeb3afae 00:22:10.824 10:03:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=0 00:22:10.824 10:03:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:22:11.084 10:03:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.iaQ 00:22:11.084 10:03:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.iaQ 00:22:11.084 10:03:24 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@67 -- # keys[0]=/tmp/spdk.key-null.iaQ 00:22:11.084 10:03:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key sha512 64 00:22:11.084 10:03:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:22:11.084 10:03:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:22:11.084 10:03:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:22:11.084 10:03:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:22:11.084 10:03:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:22:11.084 10:03:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:22:11.084 10:03:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=534ab7498b1f37322db2d67ffa5d56954f036343528aa0782d129bb12bc411f2 00:22:11.084 10:03:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:22:11.084 10:03:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.gGO 00:22:11.084 10:03:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 534ab7498b1f37322db2d67ffa5d56954f036343528aa0782d129bb12bc411f2 3 00:22:11.084 10:03:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 534ab7498b1f37322db2d67ffa5d56954f036343528aa0782d129bb12bc411f2 3 00:22:11.084 10:03:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:22:11.084 10:03:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:22:11.084 10:03:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=534ab7498b1f37322db2d67ffa5d56954f036343528aa0782d129bb12bc411f2 00:22:11.084 10:03:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:22:11.084 10:03:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:22:11.084 10:03:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.gGO 00:22:11.084 10:03:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.gGO 00:22:11.084 10:03:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # ckeys[0]=/tmp/spdk.key-sha512.gGO 00:22:11.084 10:03:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha256 32 00:22:11.084 10:03:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:22:11.084 10:03:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:22:11.084 10:03:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:22:11.084 10:03:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:22:11.084 10:03:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:22:11.084 10:03:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:22:11.084 10:03:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=c3b84759e4dd693ecddb295ce900b593 00:22:11.084 10:03:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:22:11.084 10:03:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.6sw 00:22:11.084 10:03:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key c3b84759e4dd693ecddb295ce900b593 1 00:22:11.084 10:03:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 c3b84759e4dd693ecddb295ce900b593 1 
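Every gen_dhchap_key call in this block follows the same observable recipe: map the digest name to an index (null=0, sha256=1, sha384=2, sha512=3), read len/2 random bytes as a len-character hex string with xxd (48 hex chars from 24 bytes, 64 from 32), create a /tmp/spdk.key-<digest>.XXX file with mktemp, let the inline "python -" helper wrap the hex key in the DHHC-1 envelope for that digest, then chmod the file to 0600 and echo its path. A hedged sketch of just those steps (the helper's exact DHHC-1 encoding is deliberately not reproduced here):

  gen_key_sketch() {                 # args: digest name, key length in hex characters
      local digest=$1 len=$2 key file
      local -A digest_idx=([null]=0 [sha256]=1 [sha384]=2 [sha512]=3)
      key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)   # len hex characters of random key material
      file=$(mktemp -t "spdk.key-$digest.XXX")
      # The real script feeds "$key" and "${digest_idx[$digest]}" to a small inline
      # python helper that writes the DHHC-1 formatted secret into $file; elided here.
      chmod 0600 "$file"
      echo "$file"
  }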
00:22:11.084 10:03:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:22:11.084 10:03:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:22:11.084 10:03:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=c3b84759e4dd693ecddb295ce900b593 00:22:11.084 10:03:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:22:11.084 10:03:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:22:11.084 10:03:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.6sw 00:22:11.084 10:03:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.6sw 00:22:11.084 10:03:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # keys[1]=/tmp/spdk.key-sha256.6sw 00:22:11.084 10:03:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha384 48 00:22:11.084 10:03:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:22:11.084 10:03:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:22:11.084 10:03:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:22:11.084 10:03:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:22:11.084 10:03:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:22:11.084 10:03:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:22:11.084 10:03:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=b73a450efca2aad5347198cc996d2cff226289bd3a761fe1 00:22:11.084 10:03:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:22:11.084 10:03:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.ewW 00:22:11.084 10:03:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key b73a450efca2aad5347198cc996d2cff226289bd3a761fe1 2 00:22:11.084 10:03:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 b73a450efca2aad5347198cc996d2cff226289bd3a761fe1 2 00:22:11.084 10:03:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:22:11.084 10:03:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:22:11.084 10:03:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=b73a450efca2aad5347198cc996d2cff226289bd3a761fe1 00:22:11.084 10:03:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:22:11.084 10:03:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:22:11.084 10:03:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.ewW 00:22:11.084 10:03:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.ewW 00:22:11.084 10:03:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # ckeys[1]=/tmp/spdk.key-sha384.ewW 00:22:11.084 10:03:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha384 48 00:22:11.084 10:03:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:22:11.084 10:03:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:22:11.084 10:03:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:22:11.084 10:03:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:22:11.084 10:03:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:22:11.084 
10:03:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:22:11.084 10:03:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=654f3c9ad0eb48996c8d14b4617d21e5db6b1b5e05032368 00:22:11.084 10:03:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:22:11.084 10:03:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.wIw 00:22:11.084 10:03:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 654f3c9ad0eb48996c8d14b4617d21e5db6b1b5e05032368 2 00:22:11.084 10:03:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 654f3c9ad0eb48996c8d14b4617d21e5db6b1b5e05032368 2 00:22:11.084 10:03:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:22:11.084 10:03:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:22:11.084 10:03:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=654f3c9ad0eb48996c8d14b4617d21e5db6b1b5e05032368 00:22:11.084 10:03:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:22:11.084 10:03:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:22:11.343 10:03:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.wIw 00:22:11.343 10:03:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.wIw 00:22:11.343 10:03:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # keys[2]=/tmp/spdk.key-sha384.wIw 00:22:11.343 10:03:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha256 32 00:22:11.343 10:03:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:22:11.343 10:03:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:22:11.343 10:03:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:22:11.343 10:03:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:22:11.343 10:03:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:22:11.343 10:03:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:22:11.343 10:03:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=9b6a4f8208480b1543b64a7d1bec5471 00:22:11.343 10:03:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:22:11.343 10:03:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.uer 00:22:11.343 10:03:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 9b6a4f8208480b1543b64a7d1bec5471 1 00:22:11.343 10:03:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 9b6a4f8208480b1543b64a7d1bec5471 1 00:22:11.343 10:03:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:22:11.343 10:03:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:22:11.343 10:03:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=9b6a4f8208480b1543b64a7d1bec5471 00:22:11.343 10:03:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:22:11.343 10:03:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:22:11.343 10:03:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.uer 00:22:11.343 10:03:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.uer 00:22:11.343 10:03:24 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@69 -- # ckeys[2]=/tmp/spdk.key-sha256.uer 00:22:11.343 10:03:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # gen_dhchap_key sha512 64 00:22:11.343 10:03:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:22:11.343 10:03:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:22:11.343 10:03:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:22:11.343 10:03:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:22:11.343 10:03:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:22:11.343 10:03:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:22:11.343 10:03:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=c2bb57b8df8f570b668adbf8d7911c9ff050e399072d1da97863b320143ab382 00:22:11.343 10:03:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:22:11.343 10:03:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.qtK 00:22:11.343 10:03:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key c2bb57b8df8f570b668adbf8d7911c9ff050e399072d1da97863b320143ab382 3 00:22:11.343 10:03:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 c2bb57b8df8f570b668adbf8d7911c9ff050e399072d1da97863b320143ab382 3 00:22:11.343 10:03:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:22:11.343 10:03:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:22:11.343 10:03:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=c2bb57b8df8f570b668adbf8d7911c9ff050e399072d1da97863b320143ab382 00:22:11.343 10:03:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:22:11.343 10:03:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:22:11.343 10:03:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.qtK 00:22:11.343 10:03:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.qtK 00:22:11.343 10:03:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # keys[3]=/tmp/spdk.key-sha512.qtK 00:22:11.343 10:03:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # ckeys[3]= 00:22:11.343 10:03:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@72 -- # waitforlisten 78104 00:22:11.343 10:03:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 78104 ']' 00:22:11.343 10:03:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:11.343 10:03:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:11.344 10:03:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:11.344 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
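From this point the auth test is juggling two SPDK processes: the nvmf target started earlier (pid 78104, default RPC socket /var/tmp/spdk.sock, running inside nvmf_tgt_ns_spdk) and the spdk_tgt launched as the host side (pid 78148, RPC socket /var/tmp/host.sock, -L nvme_auth). rpc_cmd addresses the former and hostrpc the latter, so each generated key file is registered on both ends under the same key name. For the first key pair that amounts to roughly:

  # Target side (default /var/tmp/spdk.sock).
  ./scripts/rpc.py keyring_file_add_key key0  /tmp/spdk.key-null.iaQ
  ./scripts/rpc.py keyring_file_add_key ckey0 /tmp/spdk.key-sha512.gGO

  # Host side (spdk_tgt listening on /var/tmp/host.sock).
  ./scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0  /tmp/spdk.key-null.iaQ
  ./scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.gGO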
00:22:11.344 10:03:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:11.344 10:03:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:11.602 10:03:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:11.602 10:03:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:22:11.602 10:03:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@73 -- # waitforlisten 78148 /var/tmp/host.sock 00:22:11.602 10:03:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 78148 ']' 00:22:11.602 10:03:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/host.sock 00:22:11.602 10:03:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:11.602 10:03:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:22:11.602 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:22:11.602 10:03:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:11.602 10:03:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:11.860 10:03:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:11.860 10:03:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:22:11.860 10:03:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 00:22:11.860 10:03:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:11.860 10:03:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:11.860 10:03:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:11.860 10:03:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:22:11.860 10:03:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.iaQ 00:22:11.860 10:03:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:11.860 10:03:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:11.860 10:03:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:11.860 10:03:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.iaQ 00:22:11.860 10:03:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.iaQ 00:22:12.119 10:03:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha512.gGO ]] 00:22:12.119 10:03:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.gGO 00:22:12.119 10:03:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:12.119 10:03:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:12.119 10:03:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:12.119 10:03:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.gGO 00:22:12.119 10:03:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 
/tmp/spdk.key-sha512.gGO 00:22:12.377 10:03:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:22:12.377 10:03:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.6sw 00:22:12.377 10:03:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:12.377 10:03:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:12.377 10:03:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:12.377 10:03:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.6sw 00:22:12.378 10:03:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.6sw 00:22:12.635 10:03:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha384.ewW ]] 00:22:12.635 10:03:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.ewW 00:22:12.635 10:03:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:12.635 10:03:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:12.635 10:03:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:12.635 10:03:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.ewW 00:22:12.635 10:03:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.ewW 00:22:12.635 10:03:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:22:12.635 10:03:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.wIw 00:22:12.635 10:03:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:12.635 10:03:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:12.635 10:03:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:12.635 10:03:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.wIw 00:22:12.635 10:03:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.wIw 00:22:12.894 10:03:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha256.uer ]] 00:22:12.894 10:03:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.uer 00:22:12.894 10:03:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:12.894 10:03:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:12.894 10:03:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:12.894 10:03:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.uer 00:22:12.894 10:03:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.uer 00:22:13.152 10:03:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:22:13.152 
10:03:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.qtK 00:22:13.152 10:03:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:13.152 10:03:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:13.152 10:03:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:13.152 10:03:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.qtK 00:22:13.152 10:03:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.qtK 00:22:13.410 10:03:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n '' ]] 00:22:13.410 10:03:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:22:13.410 10:03:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:22:13.410 10:03:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:13.410 10:03:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:22:13.410 10:03:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:22:13.669 10:03:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 0 00:22:13.669 10:03:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:13.669 10:03:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:22:13.669 10:03:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:22:13.669 10:03:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:22:13.669 10:03:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:13.669 10:03:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:13.669 10:03:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:13.669 10:03:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:13.669 10:03:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:13.669 10:03:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:13.669 10:03:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:13.927 00:22:13.927 10:03:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:13.927 10:03:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:13.927 10:03:27 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:14.186 10:03:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:14.186 10:03:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:14.186 10:03:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:14.186 10:03:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:14.186 10:03:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:14.186 10:03:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:14.186 { 00:22:14.186 "auth": { 00:22:14.186 "dhgroup": "null", 00:22:14.186 "digest": "sha256", 00:22:14.186 "state": "completed" 00:22:14.186 }, 00:22:14.186 "cntlid": 1, 00:22:14.186 "listen_address": { 00:22:14.186 "adrfam": "IPv4", 00:22:14.186 "traddr": "10.0.0.2", 00:22:14.186 "trsvcid": "4420", 00:22:14.186 "trtype": "TCP" 00:22:14.186 }, 00:22:14.186 "peer_address": { 00:22:14.186 "adrfam": "IPv4", 00:22:14.186 "traddr": "10.0.0.1", 00:22:14.186 "trsvcid": "33358", 00:22:14.186 "trtype": "TCP" 00:22:14.186 }, 00:22:14.186 "qid": 0, 00:22:14.186 "state": "enabled", 00:22:14.186 "thread": "nvmf_tgt_poll_group_000" 00:22:14.186 } 00:22:14.186 ]' 00:22:14.186 10:03:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:14.186 10:03:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:14.186 10:03:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:14.186 10:03:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:22:14.186 10:03:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:14.186 10:03:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:14.186 10:03:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:14.186 10:03:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:14.445 10:03:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec --hostid a2b6b25a-cc90-4aea-9f09-c06f8a634aec --dhchap-secret DHHC-1:00:NWQxMzZmNWQwNzExZTJlMmFlZjYyZTRlZjNhY2ZkOGQ1N2U4NDExZWJlYjNhZmFl2ifiLg==: --dhchap-ctrl-secret DHHC-1:03:NTM0YWI3NDk4YjFmMzczMjJkYjJkNjdmZmE1ZDU2OTU0ZjAzNjM0MzUyOGFhMDc4MmQxMjliYjEyYmM0MTFmMrepI4o=: 00:22:18.663 10:03:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:18.663 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:18.663 10:03:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec 00:22:18.663 10:03:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:18.663 10:03:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:18.663 10:03:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:18.663 10:03:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in 
"${!keys[@]}" 00:22:18.663 10:03:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:22:18.663 10:03:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:22:18.663 10:03:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 1 00:22:18.663 10:03:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:18.663 10:03:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:22:18.663 10:03:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:22:18.663 10:03:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:22:18.663 10:03:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:18.663 10:03:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:18.663 10:03:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:18.663 10:03:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:18.663 10:03:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:18.663 10:03:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:18.663 10:03:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:18.663 00:22:18.663 10:03:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:18.663 10:03:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:18.663 10:03:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:18.922 10:03:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:18.922 10:03:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:18.922 10:03:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:18.922 10:03:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:18.922 10:03:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:18.922 10:03:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:18.922 { 00:22:18.922 "auth": { 00:22:18.922 "dhgroup": "null", 00:22:18.922 "digest": "sha256", 00:22:18.922 "state": "completed" 00:22:18.922 }, 00:22:18.922 "cntlid": 3, 00:22:18.922 "listen_address": { 00:22:18.922 "adrfam": "IPv4", 00:22:18.922 "traddr": "10.0.0.2", 00:22:18.922 "trsvcid": "4420", 00:22:18.922 "trtype": "TCP" 00:22:18.922 }, 00:22:18.922 "peer_address": { 
00:22:18.922 "adrfam": "IPv4", 00:22:18.922 "traddr": "10.0.0.1", 00:22:18.922 "trsvcid": "50230", 00:22:18.922 "trtype": "TCP" 00:22:18.922 }, 00:22:18.922 "qid": 0, 00:22:18.922 "state": "enabled", 00:22:18.922 "thread": "nvmf_tgt_poll_group_000" 00:22:18.922 } 00:22:18.922 ]' 00:22:18.922 10:03:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:18.922 10:03:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:18.922 10:03:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:19.181 10:03:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:22:19.181 10:03:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:19.181 10:03:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:19.181 10:03:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:19.181 10:03:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:19.440 10:03:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec --hostid a2b6b25a-cc90-4aea-9f09-c06f8a634aec --dhchap-secret DHHC-1:01:YzNiODQ3NTllNGRkNjkzZWNkZGIyOTVjZTkwMGI1OTNQ924+: --dhchap-ctrl-secret DHHC-1:02:YjczYTQ1MGVmY2EyYWFkNTM0NzE5OGNjOTk2ZDJjZmYyMjYyODliZDNhNzYxZmUxbh2Sew==: 00:22:20.005 10:03:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:20.005 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:20.005 10:03:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec 00:22:20.005 10:03:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:20.005 10:03:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:20.005 10:03:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:20.005 10:03:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:20.005 10:03:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:22:20.005 10:03:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:22:20.263 10:03:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 2 00:22:20.263 10:03:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:20.263 10:03:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:22:20.263 10:03:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:22:20.263 10:03:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:22:20.263 10:03:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:20.263 10:03:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:20.263 10:03:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:20.263 10:03:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:20.263 10:03:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:20.263 10:03:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:20.263 10:03:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:20.522 00:22:20.522 10:03:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:20.522 10:03:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:20.522 10:03:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:20.780 10:03:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:20.780 10:03:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:20.780 10:03:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:20.780 10:03:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:20.780 10:03:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:20.780 10:03:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:20.780 { 00:22:20.780 "auth": { 00:22:20.780 "dhgroup": "null", 00:22:20.780 "digest": "sha256", 00:22:20.780 "state": "completed" 00:22:20.780 }, 00:22:20.780 "cntlid": 5, 00:22:20.780 "listen_address": { 00:22:20.780 "adrfam": "IPv4", 00:22:20.780 "traddr": "10.0.0.2", 00:22:20.780 "trsvcid": "4420", 00:22:20.780 "trtype": "TCP" 00:22:20.780 }, 00:22:20.780 "peer_address": { 00:22:20.780 "adrfam": "IPv4", 00:22:20.780 "traddr": "10.0.0.1", 00:22:20.780 "trsvcid": "50246", 00:22:20.780 "trtype": "TCP" 00:22:20.780 }, 00:22:20.780 "qid": 0, 00:22:20.780 "state": "enabled", 00:22:20.780 "thread": "nvmf_tgt_poll_group_000" 00:22:20.780 } 00:22:20.780 ]' 00:22:20.780 10:03:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:20.780 10:03:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:20.780 10:03:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:20.780 10:03:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:22:20.780 10:03:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:20.780 10:03:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:20.780 10:03:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:20.780 10:03:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:21.047 10:03:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec --hostid a2b6b25a-cc90-4aea-9f09-c06f8a634aec --dhchap-secret DHHC-1:02:NjU0ZjNjOWFkMGViNDg5OTZjOGQxNGI0NjE3ZDIxZTVkYjZiMWI1ZTA1MDMyMzY4c2X49g==: --dhchap-ctrl-secret DHHC-1:01:OWI2YTRmODIwODQ4MGIxNTQzYjY0YTdkMWJlYzU0NzEP+U4Y: 00:22:21.614 10:03:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:21.614 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:21.614 10:03:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec 00:22:21.614 10:03:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:21.614 10:03:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:21.614 10:03:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:21.614 10:03:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:21.614 10:03:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:22:21.614 10:03:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:22:21.874 10:03:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 3 00:22:21.874 10:03:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:21.874 10:03:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:22:21.874 10:03:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:22:21.874 10:03:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:22:21.874 10:03:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:21.874 10:03:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec --dhchap-key key3 00:22:21.874 10:03:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:21.874 10:03:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:21.874 10:03:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:21.874 10:03:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:21.874 10:03:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:22.133 00:22:22.133 10:03:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:22.133 10:03:35 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:22.133 10:03:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:22.391 10:03:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:22.391 10:03:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:22.391 10:03:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:22.391 10:03:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:22.391 10:03:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:22.391 10:03:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:22.391 { 00:22:22.391 "auth": { 00:22:22.391 "dhgroup": "null", 00:22:22.391 "digest": "sha256", 00:22:22.391 "state": "completed" 00:22:22.391 }, 00:22:22.391 "cntlid": 7, 00:22:22.391 "listen_address": { 00:22:22.391 "adrfam": "IPv4", 00:22:22.391 "traddr": "10.0.0.2", 00:22:22.391 "trsvcid": "4420", 00:22:22.391 "trtype": "TCP" 00:22:22.391 }, 00:22:22.391 "peer_address": { 00:22:22.391 "adrfam": "IPv4", 00:22:22.391 "traddr": "10.0.0.1", 00:22:22.391 "trsvcid": "50286", 00:22:22.391 "trtype": "TCP" 00:22:22.391 }, 00:22:22.391 "qid": 0, 00:22:22.391 "state": "enabled", 00:22:22.391 "thread": "nvmf_tgt_poll_group_000" 00:22:22.391 } 00:22:22.391 ]' 00:22:22.391 10:03:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:22.391 10:03:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:22.391 10:03:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:22.391 10:03:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:22:22.391 10:03:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:22.391 10:03:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:22.391 10:03:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:22.391 10:03:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:22.650 10:03:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec --hostid a2b6b25a-cc90-4aea-9f09-c06f8a634aec --dhchap-secret DHHC-1:03:YzJiYjU3YjhkZjhmNTcwYjY2OGFkYmY4ZDc5MTFjOWZmMDUwZTM5OTA3MmQxZGE5Nzg2M2IzMjAxNDNhYjM4Mgzy6hM=: 00:22:23.219 10:03:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:23.219 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:23.219 10:03:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec 00:22:23.219 10:03:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:23.219 10:03:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:23.219 10:03:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:23.219 10:03:36 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:22:23.219 10:03:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:23.219 10:03:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:22:23.219 10:03:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:22:23.479 10:03:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 0 00:22:23.479 10:03:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:23.479 10:03:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:22:23.479 10:03:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:22:23.479 10:03:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:22:23.479 10:03:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:23.479 10:03:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:23.479 10:03:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:23.479 10:03:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:23.479 10:03:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:23.479 10:03:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:23.479 10:03:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:23.738 00:22:23.738 10:03:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:23.738 10:03:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:23.738 10:03:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:23.998 10:03:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:23.998 10:03:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:23.998 10:03:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:23.998 10:03:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:23.998 10:03:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:23.998 10:03:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:23.998 { 00:22:23.998 "auth": { 00:22:23.998 "dhgroup": "ffdhe2048", 00:22:23.998 "digest": "sha256", 00:22:23.998 "state": "completed" 00:22:23.998 }, 00:22:23.998 "cntlid": 9, 00:22:23.998 "listen_address": { 00:22:23.998 
"adrfam": "IPv4", 00:22:23.998 "traddr": "10.0.0.2", 00:22:23.998 "trsvcid": "4420", 00:22:23.998 "trtype": "TCP" 00:22:23.998 }, 00:22:23.998 "peer_address": { 00:22:23.998 "adrfam": "IPv4", 00:22:23.998 "traddr": "10.0.0.1", 00:22:23.998 "trsvcid": "50316", 00:22:23.998 "trtype": "TCP" 00:22:23.998 }, 00:22:23.998 "qid": 0, 00:22:23.998 "state": "enabled", 00:22:23.998 "thread": "nvmf_tgt_poll_group_000" 00:22:23.998 } 00:22:23.998 ]' 00:22:23.998 10:03:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:23.998 10:03:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:23.998 10:03:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:24.257 10:03:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:22:24.257 10:03:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:24.257 10:03:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:24.257 10:03:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:24.257 10:03:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:24.517 10:03:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec --hostid a2b6b25a-cc90-4aea-9f09-c06f8a634aec --dhchap-secret DHHC-1:00:NWQxMzZmNWQwNzExZTJlMmFlZjYyZTRlZjNhY2ZkOGQ1N2U4NDExZWJlYjNhZmFl2ifiLg==: --dhchap-ctrl-secret DHHC-1:03:NTM0YWI3NDk4YjFmMzczMjJkYjJkNjdmZmE1ZDU2OTU0ZjAzNjM0MzUyOGFhMDc4MmQxMjliYjEyYmM0MTFmMrepI4o=: 00:22:25.092 10:03:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:25.092 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:25.092 10:03:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec 00:22:25.092 10:03:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:25.092 10:03:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:25.092 10:03:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:25.092 10:03:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:25.092 10:03:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:22:25.092 10:03:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:22:25.092 10:03:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 1 00:22:25.092 10:03:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:25.092 10:03:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:22:25.092 10:03:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:22:25.092 10:03:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:22:25.092 10:03:38 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:25.092 10:03:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:25.092 10:03:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:25.092 10:03:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:25.092 10:03:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:25.092 10:03:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:25.092 10:03:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:25.362 00:22:25.621 10:03:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:25.621 10:03:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:25.621 10:03:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:25.621 10:03:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:25.621 10:03:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:25.621 10:03:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:25.621 10:03:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:25.880 10:03:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:25.880 10:03:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:25.880 { 00:22:25.880 "auth": { 00:22:25.880 "dhgroup": "ffdhe2048", 00:22:25.880 "digest": "sha256", 00:22:25.880 "state": "completed" 00:22:25.880 }, 00:22:25.880 "cntlid": 11, 00:22:25.880 "listen_address": { 00:22:25.880 "adrfam": "IPv4", 00:22:25.880 "traddr": "10.0.0.2", 00:22:25.880 "trsvcid": "4420", 00:22:25.880 "trtype": "TCP" 00:22:25.880 }, 00:22:25.880 "peer_address": { 00:22:25.880 "adrfam": "IPv4", 00:22:25.880 "traddr": "10.0.0.1", 00:22:25.880 "trsvcid": "50342", 00:22:25.880 "trtype": "TCP" 00:22:25.880 }, 00:22:25.880 "qid": 0, 00:22:25.880 "state": "enabled", 00:22:25.880 "thread": "nvmf_tgt_poll_group_000" 00:22:25.880 } 00:22:25.880 ]' 00:22:25.880 10:03:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:25.880 10:03:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:25.880 10:03:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:25.880 10:03:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:22:25.880 10:03:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:25.880 10:03:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 
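Each connect_authenticate iteration in this trace ends with the same host-side check: fetch the subsystem's qpairs over RPC and compare the negotiated digest, DH group and auth state against what was configured (the repeated [[ sha256 == sha256 ]], [[ ffdhe2048 == ffdhe2048 ]], [[ completed == completed ]] lines). A small sketch of that check, assuming the target answers on rpc.py's default socket and using the script path from this log; the verify_auth helper is illustrative:

#!/usr/bin/env bash
# Re-create the qpair verification seen after each bdev_nvme_attach_controller above.
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
SUBNQN=nqn.2024-03.io.spdk:cnode0

verify_auth() {
  local expected_digest=$1 expected_dhgroup=$2 qpairs
  qpairs=$($RPC nvmf_subsystem_get_qpairs "$SUBNQN")                     # target-side RPC
  [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == "$expected_digest"  ]] &&
  [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "$expected_dhgroup" ]] &&
  [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == "completed"         ]]
}

verify_auth sha256 ffdhe2048    # e.g. the cntlid 9 qpair printed a few entries above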
00:22:25.880 10:03:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:25.880 10:03:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:26.138 10:03:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec --hostid a2b6b25a-cc90-4aea-9f09-c06f8a634aec --dhchap-secret DHHC-1:01:YzNiODQ3NTllNGRkNjkzZWNkZGIyOTVjZTkwMGI1OTNQ924+: --dhchap-ctrl-secret DHHC-1:02:YjczYTQ1MGVmY2EyYWFkNTM0NzE5OGNjOTk2ZDJjZmYyMjYyODliZDNhNzYxZmUxbh2Sew==: 00:22:26.703 10:03:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:26.703 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:26.703 10:03:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec 00:22:26.703 10:03:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:26.703 10:03:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:26.703 10:03:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:26.703 10:03:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:26.703 10:03:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:22:26.703 10:03:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:22:26.963 10:03:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 2 00:22:26.963 10:03:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:26.963 10:03:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:22:26.963 10:03:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:22:26.963 10:03:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:22:26.963 10:03:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:26.963 10:03:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:26.963 10:03:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:26.963 10:03:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:26.963 10:03:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:26.963 10:03:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:26.963 10:03:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:27.222 00:22:27.222 10:03:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:27.222 10:03:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:27.222 10:03:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:27.480 10:03:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:27.480 10:03:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:27.480 10:03:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:27.480 10:03:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:27.480 10:03:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:27.480 10:03:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:27.480 { 00:22:27.480 "auth": { 00:22:27.480 "dhgroup": "ffdhe2048", 00:22:27.480 "digest": "sha256", 00:22:27.480 "state": "completed" 00:22:27.480 }, 00:22:27.480 "cntlid": 13, 00:22:27.480 "listen_address": { 00:22:27.480 "adrfam": "IPv4", 00:22:27.480 "traddr": "10.0.0.2", 00:22:27.480 "trsvcid": "4420", 00:22:27.480 "trtype": "TCP" 00:22:27.480 }, 00:22:27.480 "peer_address": { 00:22:27.480 "adrfam": "IPv4", 00:22:27.480 "traddr": "10.0.0.1", 00:22:27.480 "trsvcid": "50378", 00:22:27.480 "trtype": "TCP" 00:22:27.480 }, 00:22:27.480 "qid": 0, 00:22:27.480 "state": "enabled", 00:22:27.480 "thread": "nvmf_tgt_poll_group_000" 00:22:27.480 } 00:22:27.480 ]' 00:22:27.480 10:03:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:27.480 10:03:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:27.480 10:03:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:27.480 10:03:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:22:27.480 10:03:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:27.739 10:03:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:27.739 10:03:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:27.739 10:03:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:27.739 10:03:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec --hostid a2b6b25a-cc90-4aea-9f09-c06f8a634aec --dhchap-secret DHHC-1:02:NjU0ZjNjOWFkMGViNDg5OTZjOGQxNGI0NjE3ZDIxZTVkYjZiMWI1ZTA1MDMyMzY4c2X49g==: --dhchap-ctrl-secret DHHC-1:01:OWI2YTRmODIwODQ4MGIxNTQzYjY0YTdkMWJlYzU0NzEP+U4Y: 00:22:28.674 10:03:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:28.674 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:28.674 10:03:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec 
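Besides the SPDK host stack, every iteration also exercises the kernel initiator with the same key material, as in the nvme connect / nvme disconnect pair just above. A sketch of that leg, reusing the key2/ckey2 secrets visible in this log (throwaway test keys generated earlier in the run); only the standalone framing is added here:

#!/usr/bin/env bash
# Kernel-initiator leg of an iteration, mirroring the nvme-cli calls traced above.
NQN=nqn.2024-03.io.spdk:cnode0
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec

# Bidirectional DH-HMAC-CHAP: --dhchap-secret authenticates the host,
# --dhchap-ctrl-secret authenticates the controller back to the host.
nvme connect -t tcp -a 10.0.0.2 -n "$NQN" -i 1 -q "$HOSTNQN" --hostid "${HOSTNQN#*uuid:}" \
    --dhchap-secret 'DHHC-1:02:NjU0ZjNjOWFkMGViNDg5OTZjOGQxNGI0NjE3ZDIxZTVkYjZiMWI1ZTA1MDMyMzY4c2X49g==:' \
    --dhchap-ctrl-secret 'DHHC-1:01:OWI2YTRmODIwODQ4MGIxNTQzYjY0YTdkMWJlYzU0NzEP+U4Y:'
nvme disconnect -n "$NQN"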
00:22:28.674 10:03:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:28.674 10:03:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:28.674 10:03:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:28.674 10:03:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:28.674 10:03:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:22:28.674 10:03:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:22:28.674 10:03:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 3 00:22:28.674 10:03:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:28.674 10:03:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:22:28.674 10:03:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:22:28.674 10:03:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:22:28.674 10:03:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:28.674 10:03:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec --dhchap-key key3 00:22:28.674 10:03:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:28.674 10:03:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:28.674 10:03:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:28.674 10:03:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:28.674 10:03:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:28.930 00:22:28.930 10:03:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:28.930 10:03:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:28.930 10:03:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:29.187 10:03:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:29.187 10:03:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:29.187 10:03:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:29.187 10:03:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:29.187 10:03:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:29.187 10:03:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:29.187 { 00:22:29.187 "auth": { 00:22:29.187 "dhgroup": 
"ffdhe2048", 00:22:29.187 "digest": "sha256", 00:22:29.187 "state": "completed" 00:22:29.187 }, 00:22:29.187 "cntlid": 15, 00:22:29.187 "listen_address": { 00:22:29.187 "adrfam": "IPv4", 00:22:29.187 "traddr": "10.0.0.2", 00:22:29.187 "trsvcid": "4420", 00:22:29.187 "trtype": "TCP" 00:22:29.187 }, 00:22:29.187 "peer_address": { 00:22:29.187 "adrfam": "IPv4", 00:22:29.187 "traddr": "10.0.0.1", 00:22:29.187 "trsvcid": "60122", 00:22:29.187 "trtype": "TCP" 00:22:29.187 }, 00:22:29.187 "qid": 0, 00:22:29.187 "state": "enabled", 00:22:29.187 "thread": "nvmf_tgt_poll_group_000" 00:22:29.187 } 00:22:29.187 ]' 00:22:29.187 10:03:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:29.187 10:03:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:29.187 10:03:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:29.187 10:03:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:22:29.460 10:03:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:29.460 10:03:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:29.460 10:03:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:29.460 10:03:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:29.725 10:03:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec --hostid a2b6b25a-cc90-4aea-9f09-c06f8a634aec --dhchap-secret DHHC-1:03:YzJiYjU3YjhkZjhmNTcwYjY2OGFkYmY4ZDc5MTFjOWZmMDUwZTM5OTA3MmQxZGE5Nzg2M2IzMjAxNDNhYjM4Mgzy6hM=: 00:22:30.292 10:03:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:30.292 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:30.292 10:03:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec 00:22:30.292 10:03:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:30.292 10:03:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:30.292 10:03:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:30.292 10:03:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:22:30.292 10:03:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:30.292 10:03:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:22:30.292 10:03:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:22:30.292 10:03:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 0 00:22:30.292 10:03:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:30.292 10:03:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:22:30.292 10:03:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- 
# dhgroup=ffdhe3072 00:22:30.292 10:03:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:22:30.292 10:03:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:30.292 10:03:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:30.292 10:03:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:30.292 10:03:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:30.292 10:03:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:30.292 10:03:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:30.292 10:03:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:30.551 00:22:30.810 10:03:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:30.810 10:03:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:30.810 10:03:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:30.810 10:03:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:30.810 10:03:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:30.810 10:03:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:30.810 10:03:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:30.810 10:03:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:30.810 10:03:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:30.810 { 00:22:30.810 "auth": { 00:22:30.810 "dhgroup": "ffdhe3072", 00:22:30.810 "digest": "sha256", 00:22:30.810 "state": "completed" 00:22:30.810 }, 00:22:30.810 "cntlid": 17, 00:22:30.810 "listen_address": { 00:22:30.810 "adrfam": "IPv4", 00:22:30.810 "traddr": "10.0.0.2", 00:22:30.810 "trsvcid": "4420", 00:22:30.810 "trtype": "TCP" 00:22:30.810 }, 00:22:30.810 "peer_address": { 00:22:30.810 "adrfam": "IPv4", 00:22:30.810 "traddr": "10.0.0.1", 00:22:30.810 "trsvcid": "60162", 00:22:30.810 "trtype": "TCP" 00:22:30.810 }, 00:22:30.810 "qid": 0, 00:22:30.810 "state": "enabled", 00:22:30.810 "thread": "nvmf_tgt_poll_group_000" 00:22:30.810 } 00:22:30.810 ]' 00:22:30.810 10:03:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:31.068 10:03:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:31.068 10:03:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:31.068 10:03:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:22:31.068 10:03:44 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:31.068 10:03:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:31.068 10:03:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:31.068 10:03:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:31.327 10:03:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec --hostid a2b6b25a-cc90-4aea-9f09-c06f8a634aec --dhchap-secret DHHC-1:00:NWQxMzZmNWQwNzExZTJlMmFlZjYyZTRlZjNhY2ZkOGQ1N2U4NDExZWJlYjNhZmFl2ifiLg==: --dhchap-ctrl-secret DHHC-1:03:NTM0YWI3NDk4YjFmMzczMjJkYjJkNjdmZmE1ZDU2OTU0ZjAzNjM0MzUyOGFhMDc4MmQxMjliYjEyYmM0MTFmMrepI4o=: 00:22:31.895 10:03:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:31.895 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:31.895 10:03:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec 00:22:31.895 10:03:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:31.895 10:03:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:31.895 10:03:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:31.895 10:03:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:31.895 10:03:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:22:31.895 10:03:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:22:32.154 10:03:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 1 00:22:32.154 10:03:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:32.154 10:03:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:22:32.154 10:03:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:22:32.154 10:03:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:22:32.154 10:03:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:32.154 10:03:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:32.154 10:03:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:32.154 10:03:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:32.154 10:03:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:32.155 10:03:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:32.155 
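Each iteration traced above repeats the same host/target RPC sequence for one digest/dhgroup/key combination. A minimal bash sketch of a single ffdhe3072 round, assuming the host RPC socket at /var/tmp/host.sock as in this run, the target RPC socket at its default path (the log's rpc_cmd wrapper hides the target socket), and key1/ckey1 already registered in the keyring earlier in this run:

    # host side: restrict DH-HMAC-CHAP negotiation to sha256 / ffdhe3072
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
    # target side: allow the host NQN to authenticate with key1 (and ckey1 for bidirectional auth)
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1
    # host side: attach an authenticated controller over TCP
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller \
        -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec \
        -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1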
10:03:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:32.414 00:22:32.414 10:03:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:32.414 10:03:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:32.414 10:03:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:32.674 10:03:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:32.674 10:03:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:32.674 10:03:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:32.674 10:03:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:32.674 10:03:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:32.674 10:03:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:32.674 { 00:22:32.674 "auth": { 00:22:32.674 "dhgroup": "ffdhe3072", 00:22:32.674 "digest": "sha256", 00:22:32.674 "state": "completed" 00:22:32.674 }, 00:22:32.674 "cntlid": 19, 00:22:32.674 "listen_address": { 00:22:32.674 "adrfam": "IPv4", 00:22:32.674 "traddr": "10.0.0.2", 00:22:32.674 "trsvcid": "4420", 00:22:32.674 "trtype": "TCP" 00:22:32.674 }, 00:22:32.674 "peer_address": { 00:22:32.674 "adrfam": "IPv4", 00:22:32.674 "traddr": "10.0.0.1", 00:22:32.674 "trsvcid": "60192", 00:22:32.674 "trtype": "TCP" 00:22:32.674 }, 00:22:32.674 "qid": 0, 00:22:32.674 "state": "enabled", 00:22:32.674 "thread": "nvmf_tgt_poll_group_000" 00:22:32.674 } 00:22:32.674 ]' 00:22:32.674 10:03:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:32.674 10:03:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:32.674 10:03:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:32.934 10:03:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:22:32.934 10:03:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:32.934 10:03:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:32.934 10:03:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:32.934 10:03:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:33.193 10:03:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec --hostid a2b6b25a-cc90-4aea-9f09-c06f8a634aec --dhchap-secret DHHC-1:01:YzNiODQ3NTllNGRkNjkzZWNkZGIyOTVjZTkwMGI1OTNQ924+: --dhchap-ctrl-secret DHHC-1:02:YjczYTQ1MGVmY2EyYWFkNTM0NzE5OGNjOTk2ZDJjZmYyMjYyODliZDNhNzYxZmUxbh2Sew==: 00:22:33.762 10:03:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:33.762 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
00:22:33.762 10:03:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec 00:22:33.762 10:03:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:33.762 10:03:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:33.762 10:03:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:33.762 10:03:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:33.762 10:03:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:22:33.762 10:03:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:22:34.022 10:03:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 2 00:22:34.022 10:03:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:34.022 10:03:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:22:34.022 10:03:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:22:34.022 10:03:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:22:34.022 10:03:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:34.022 10:03:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:34.022 10:03:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:34.022 10:03:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:34.022 10:03:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:34.022 10:03:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:34.022 10:03:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:34.336 00:22:34.336 10:03:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:34.336 10:03:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:34.336 10:03:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:34.336 10:03:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:34.336 10:03:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:34.336 10:03:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:34.336 10:03:47 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:22:34.336 10:03:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:34.336 10:03:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:34.336 { 00:22:34.336 "auth": { 00:22:34.336 "dhgroup": "ffdhe3072", 00:22:34.336 "digest": "sha256", 00:22:34.336 "state": "completed" 00:22:34.336 }, 00:22:34.336 "cntlid": 21, 00:22:34.336 "listen_address": { 00:22:34.336 "adrfam": "IPv4", 00:22:34.336 "traddr": "10.0.0.2", 00:22:34.336 "trsvcid": "4420", 00:22:34.336 "trtype": "TCP" 00:22:34.336 }, 00:22:34.336 "peer_address": { 00:22:34.336 "adrfam": "IPv4", 00:22:34.336 "traddr": "10.0.0.1", 00:22:34.336 "trsvcid": "60214", 00:22:34.336 "trtype": "TCP" 00:22:34.336 }, 00:22:34.336 "qid": 0, 00:22:34.336 "state": "enabled", 00:22:34.336 "thread": "nvmf_tgt_poll_group_000" 00:22:34.336 } 00:22:34.336 ]' 00:22:34.336 10:03:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:34.595 10:03:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:34.595 10:03:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:34.595 10:03:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:22:34.595 10:03:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:34.595 10:03:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:34.595 10:03:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:34.595 10:03:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:34.854 10:03:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec --hostid a2b6b25a-cc90-4aea-9f09-c06f8a634aec --dhchap-secret DHHC-1:02:NjU0ZjNjOWFkMGViNDg5OTZjOGQxNGI0NjE3ZDIxZTVkYjZiMWI1ZTA1MDMyMzY4c2X49g==: --dhchap-ctrl-secret DHHC-1:01:OWI2YTRmODIwODQ4MGIxNTQzYjY0YTdkMWJlYzU0NzEP+U4Y: 00:22:35.423 10:03:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:35.423 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:35.423 10:03:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec 00:22:35.423 10:03:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:35.423 10:03:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:35.423 10:03:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:35.423 10:03:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:35.423 10:03:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:22:35.423 10:03:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:22:35.683 10:03:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 3 00:22:35.683 10:03:49 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:35.683 10:03:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:22:35.683 10:03:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:22:35.683 10:03:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:22:35.683 10:03:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:35.683 10:03:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec --dhchap-key key3 00:22:35.683 10:03:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:35.683 10:03:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:35.683 10:03:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:35.683 10:03:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:35.683 10:03:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:35.943 00:22:35.943 10:03:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:35.943 10:03:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:35.943 10:03:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:36.202 10:03:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:36.202 10:03:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:36.202 10:03:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:36.202 10:03:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:36.202 10:03:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:36.202 10:03:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:36.202 { 00:22:36.202 "auth": { 00:22:36.202 "dhgroup": "ffdhe3072", 00:22:36.202 "digest": "sha256", 00:22:36.202 "state": "completed" 00:22:36.202 }, 00:22:36.202 "cntlid": 23, 00:22:36.202 "listen_address": { 00:22:36.202 "adrfam": "IPv4", 00:22:36.202 "traddr": "10.0.0.2", 00:22:36.202 "trsvcid": "4420", 00:22:36.202 "trtype": "TCP" 00:22:36.202 }, 00:22:36.202 "peer_address": { 00:22:36.202 "adrfam": "IPv4", 00:22:36.202 "traddr": "10.0.0.1", 00:22:36.202 "trsvcid": "60250", 00:22:36.202 "trtype": "TCP" 00:22:36.202 }, 00:22:36.202 "qid": 0, 00:22:36.202 "state": "enabled", 00:22:36.202 "thread": "nvmf_tgt_poll_group_000" 00:22:36.202 } 00:22:36.202 ]' 00:22:36.202 10:03:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:36.202 10:03:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:36.202 10:03:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r 
'.[0].auth.dhgroup' 00:22:36.202 10:03:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:22:36.202 10:03:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:36.202 10:03:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:36.203 10:03:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:36.203 10:03:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:36.462 10:03:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec --hostid a2b6b25a-cc90-4aea-9f09-c06f8a634aec --dhchap-secret DHHC-1:03:YzJiYjU3YjhkZjhmNTcwYjY2OGFkYmY4ZDc5MTFjOWZmMDUwZTM5OTA3MmQxZGE5Nzg2M2IzMjAxNDNhYjM4Mgzy6hM=: 00:22:37.031 10:03:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:37.031 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:37.031 10:03:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec 00:22:37.031 10:03:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:37.031 10:03:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:37.031 10:03:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:37.031 10:03:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:22:37.031 10:03:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:37.031 10:03:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:22:37.031 10:03:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:22:37.291 10:03:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 0 00:22:37.291 10:03:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:37.291 10:03:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:22:37.291 10:03:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:22:37.291 10:03:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:22:37.291 10:03:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:37.291 10:03:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:37.291 10:03:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:37.291 10:03:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:37.291 10:03:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:37.291 10:03:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:37.291 10:03:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:37.549 00:22:37.549 10:03:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:37.549 10:03:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:37.549 10:03:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:37.808 10:03:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:37.808 10:03:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:37.808 10:03:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:37.808 10:03:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:37.808 10:03:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:37.808 10:03:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:37.808 { 00:22:37.808 "auth": { 00:22:37.808 "dhgroup": "ffdhe4096", 00:22:37.808 "digest": "sha256", 00:22:37.808 "state": "completed" 00:22:37.808 }, 00:22:37.808 "cntlid": 25, 00:22:37.808 "listen_address": { 00:22:37.808 "adrfam": "IPv4", 00:22:37.808 "traddr": "10.0.0.2", 00:22:37.808 "trsvcid": "4420", 00:22:37.808 "trtype": "TCP" 00:22:37.808 }, 00:22:37.808 "peer_address": { 00:22:37.808 "adrfam": "IPv4", 00:22:37.808 "traddr": "10.0.0.1", 00:22:37.808 "trsvcid": "60270", 00:22:37.808 "trtype": "TCP" 00:22:37.808 }, 00:22:37.808 "qid": 0, 00:22:37.808 "state": "enabled", 00:22:37.808 "thread": "nvmf_tgt_poll_group_000" 00:22:37.808 } 00:22:37.808 ]' 00:22:37.808 10:03:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:37.808 10:03:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:37.808 10:03:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:37.808 10:03:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:37.808 10:03:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:38.068 10:03:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:38.068 10:03:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:38.068 10:03:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:38.068 10:03:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec --hostid a2b6b25a-cc90-4aea-9f09-c06f8a634aec --dhchap-secret DHHC-1:00:NWQxMzZmNWQwNzExZTJlMmFlZjYyZTRlZjNhY2ZkOGQ1N2U4NDExZWJlYjNhZmFl2ifiLg==: --dhchap-ctrl-secret 
DHHC-1:03:NTM0YWI3NDk4YjFmMzczMjJkYjJkNjdmZmE1ZDU2OTU0ZjAzNjM0MzUyOGFhMDc4MmQxMjliYjEyYmM0MTFmMrepI4o=: 00:22:38.672 10:03:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:38.672 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:38.672 10:03:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec 00:22:38.672 10:03:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:38.672 10:03:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:38.672 10:03:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:38.672 10:03:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:38.672 10:03:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:22:38.672 10:03:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:22:38.931 10:03:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 1 00:22:38.931 10:03:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:38.931 10:03:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:22:38.931 10:03:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:22:38.931 10:03:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:22:38.931 10:03:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:38.931 10:03:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:38.931 10:03:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:38.931 10:03:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:38.931 10:03:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:38.931 10:03:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:38.931 10:03:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:39.190 00:22:39.190 10:03:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:39.190 10:03:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:39.190 10:03:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:39.450 10:03:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 
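After each attach, target/auth.sh verifies both the controller name and the negotiated authentication parameters, using the jq filters seen in the trace. A condensed sketch of that check, assuming a controller named nvme0 is still attached and writing the qpair dump to a scratch file (qpairs.json, a name used only for this sketch):

    # host side: confirm the bdev_nvme controller exists
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers \
        | jq -r '.[].name'                      # expected: nvme0
    # target side: inspect the qpair's auth block for the subsystem
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 \
        > qpairs.json
    jq -r '.[0].auth.digest'  qpairs.json       # expected: sha256
    jq -r '.[0].auth.dhgroup' qpairs.json       # expected: the dhgroup under test (here ffdhe4096)
    jq -r '.[0].auth.state'   qpairs.json       # expected: completed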
00:22:39.450 10:03:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:39.450 10:03:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:39.450 10:03:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:39.450 10:03:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:39.450 10:03:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:39.450 { 00:22:39.450 "auth": { 00:22:39.450 "dhgroup": "ffdhe4096", 00:22:39.450 "digest": "sha256", 00:22:39.450 "state": "completed" 00:22:39.450 }, 00:22:39.450 "cntlid": 27, 00:22:39.450 "listen_address": { 00:22:39.450 "adrfam": "IPv4", 00:22:39.450 "traddr": "10.0.0.2", 00:22:39.450 "trsvcid": "4420", 00:22:39.450 "trtype": "TCP" 00:22:39.450 }, 00:22:39.450 "peer_address": { 00:22:39.450 "adrfam": "IPv4", 00:22:39.450 "traddr": "10.0.0.1", 00:22:39.450 "trsvcid": "58008", 00:22:39.450 "trtype": "TCP" 00:22:39.450 }, 00:22:39.450 "qid": 0, 00:22:39.450 "state": "enabled", 00:22:39.450 "thread": "nvmf_tgt_poll_group_000" 00:22:39.450 } 00:22:39.450 ]' 00:22:39.450 10:03:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:39.450 10:03:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:39.450 10:03:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:39.450 10:03:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:39.450 10:03:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:39.709 10:03:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:39.709 10:03:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:39.709 10:03:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:39.709 10:03:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec --hostid a2b6b25a-cc90-4aea-9f09-c06f8a634aec --dhchap-secret DHHC-1:01:YzNiODQ3NTllNGRkNjkzZWNkZGIyOTVjZTkwMGI1OTNQ924+: --dhchap-ctrl-secret DHHC-1:02:YjczYTQ1MGVmY2EyYWFkNTM0NzE5OGNjOTk2ZDJjZmYyMjYyODliZDNhNzYxZmUxbh2Sew==: 00:22:40.275 10:03:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:40.275 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:40.275 10:03:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec 00:22:40.275 10:03:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:40.275 10:03:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:40.275 10:03:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:40.275 10:03:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:40.275 10:03:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:22:40.275 10:03:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:22:40.534 10:03:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 2 00:22:40.534 10:03:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:40.534 10:03:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:22:40.534 10:03:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:22:40.534 10:03:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:22:40.534 10:03:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:40.534 10:03:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:40.534 10:03:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:40.534 10:03:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:40.534 10:03:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:40.534 10:03:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:40.534 10:03:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:41.145 00:22:41.145 10:03:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:41.145 10:03:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:41.145 10:03:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:41.145 10:03:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:41.145 10:03:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:41.145 10:03:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:41.145 10:03:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:41.405 10:03:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:41.405 10:03:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:41.405 { 00:22:41.405 "auth": { 00:22:41.405 "dhgroup": "ffdhe4096", 00:22:41.405 "digest": "sha256", 00:22:41.405 "state": "completed" 00:22:41.405 }, 00:22:41.405 "cntlid": 29, 00:22:41.405 "listen_address": { 00:22:41.405 "adrfam": "IPv4", 00:22:41.405 "traddr": "10.0.0.2", 00:22:41.405 "trsvcid": "4420", 00:22:41.405 "trtype": "TCP" 00:22:41.405 }, 00:22:41.405 "peer_address": { 00:22:41.405 "adrfam": "IPv4", 00:22:41.405 "traddr": "10.0.0.1", 00:22:41.405 "trsvcid": "58030", 00:22:41.405 "trtype": "TCP" 00:22:41.405 }, 00:22:41.405 "qid": 0, 00:22:41.405 "state": "enabled", 00:22:41.405 "thread": 
"nvmf_tgt_poll_group_000" 00:22:41.405 } 00:22:41.405 ]' 00:22:41.405 10:03:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:41.405 10:03:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:41.405 10:03:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:41.405 10:03:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:41.405 10:03:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:41.405 10:03:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:41.405 10:03:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:41.405 10:03:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:41.666 10:03:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec --hostid a2b6b25a-cc90-4aea-9f09-c06f8a634aec --dhchap-secret DHHC-1:02:NjU0ZjNjOWFkMGViNDg5OTZjOGQxNGI0NjE3ZDIxZTVkYjZiMWI1ZTA1MDMyMzY4c2X49g==: --dhchap-ctrl-secret DHHC-1:01:OWI2YTRmODIwODQ4MGIxNTQzYjY0YTdkMWJlYzU0NzEP+U4Y: 00:22:42.231 10:03:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:42.231 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:42.231 10:03:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec 00:22:42.231 10:03:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:42.231 10:03:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:42.231 10:03:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:42.231 10:03:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:42.232 10:03:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:22:42.232 10:03:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:22:42.512 10:03:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 3 00:22:42.512 10:03:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:42.512 10:03:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:22:42.512 10:03:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:22:42.512 10:03:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:22:42.512 10:03:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:42.512 10:03:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec --dhchap-key key3 00:22:42.512 10:03:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:42.512 10:03:56 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:22:42.512 10:03:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:42.512 10:03:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:42.512 10:03:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:42.770 00:22:42.770 10:03:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:42.770 10:03:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:42.770 10:03:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:43.028 10:03:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:43.028 10:03:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:43.028 10:03:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:43.028 10:03:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:43.028 10:03:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:43.028 10:03:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:43.028 { 00:22:43.028 "auth": { 00:22:43.028 "dhgroup": "ffdhe4096", 00:22:43.028 "digest": "sha256", 00:22:43.028 "state": "completed" 00:22:43.028 }, 00:22:43.028 "cntlid": 31, 00:22:43.028 "listen_address": { 00:22:43.028 "adrfam": "IPv4", 00:22:43.028 "traddr": "10.0.0.2", 00:22:43.028 "trsvcid": "4420", 00:22:43.028 "trtype": "TCP" 00:22:43.028 }, 00:22:43.028 "peer_address": { 00:22:43.028 "adrfam": "IPv4", 00:22:43.028 "traddr": "10.0.0.1", 00:22:43.028 "trsvcid": "58056", 00:22:43.028 "trtype": "TCP" 00:22:43.028 }, 00:22:43.028 "qid": 0, 00:22:43.028 "state": "enabled", 00:22:43.028 "thread": "nvmf_tgt_poll_group_000" 00:22:43.028 } 00:22:43.028 ]' 00:22:43.028 10:03:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:43.028 10:03:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:43.287 10:03:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:43.287 10:03:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:43.287 10:03:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:43.287 10:03:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:43.287 10:03:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:43.287 10:03:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:43.545 10:03:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec --hostid 
a2b6b25a-cc90-4aea-9f09-c06f8a634aec --dhchap-secret DHHC-1:03:YzJiYjU3YjhkZjhmNTcwYjY2OGFkYmY4ZDc5MTFjOWZmMDUwZTM5OTA3MmQxZGE5Nzg2M2IzMjAxNDNhYjM4Mgzy6hM=: 00:22:44.113 10:03:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:44.113 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:44.113 10:03:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec 00:22:44.113 10:03:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:44.113 10:03:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:44.113 10:03:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:44.113 10:03:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:22:44.113 10:03:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:44.113 10:03:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:22:44.113 10:03:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:22:44.372 10:03:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 0 00:22:44.372 10:03:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:44.372 10:03:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:22:44.372 10:03:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:22:44.372 10:03:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:22:44.372 10:03:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:44.372 10:03:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:44.372 10:03:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:44.372 10:03:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:44.372 10:03:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:44.372 10:03:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:44.372 10:03:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:44.630 00:22:44.630 10:03:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:44.630 10:03:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:44.630 10:03:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:44.888 10:03:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:44.888 10:03:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:44.888 10:03:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:44.888 10:03:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:44.888 10:03:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:44.888 10:03:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:44.888 { 00:22:44.888 "auth": { 00:22:44.888 "dhgroup": "ffdhe6144", 00:22:44.888 "digest": "sha256", 00:22:44.888 "state": "completed" 00:22:44.888 }, 00:22:44.888 "cntlid": 33, 00:22:44.888 "listen_address": { 00:22:44.888 "adrfam": "IPv4", 00:22:44.888 "traddr": "10.0.0.2", 00:22:44.888 "trsvcid": "4420", 00:22:44.888 "trtype": "TCP" 00:22:44.888 }, 00:22:44.888 "peer_address": { 00:22:44.888 "adrfam": "IPv4", 00:22:44.888 "traddr": "10.0.0.1", 00:22:44.888 "trsvcid": "58074", 00:22:44.888 "trtype": "TCP" 00:22:44.888 }, 00:22:44.888 "qid": 0, 00:22:44.888 "state": "enabled", 00:22:44.888 "thread": "nvmf_tgt_poll_group_000" 00:22:44.888 } 00:22:44.888 ]' 00:22:44.888 10:03:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:44.888 10:03:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:44.888 10:03:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:45.147 10:03:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:45.147 10:03:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:45.147 10:03:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:45.147 10:03:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:45.147 10:03:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:45.405 10:03:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec --hostid a2b6b25a-cc90-4aea-9f09-c06f8a634aec --dhchap-secret DHHC-1:00:NWQxMzZmNWQwNzExZTJlMmFlZjYyZTRlZjNhY2ZkOGQ1N2U4NDExZWJlYjNhZmFl2ifiLg==: --dhchap-ctrl-secret DHHC-1:03:NTM0YWI3NDk4YjFmMzczMjJkYjJkNjdmZmE1ZDU2OTU0ZjAzNjM0MzUyOGFhMDc4MmQxMjliYjEyYmM0MTFmMrepI4o=: 00:22:45.987 10:03:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:45.987 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:45.987 10:03:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec 00:22:45.987 10:03:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:45.987 10:03:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:45.987 10:03:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:45.987 10:03:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in 
"${!keys[@]}" 00:22:45.987 10:03:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:22:45.987 10:03:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:22:46.247 10:03:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 1 00:22:46.247 10:03:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:46.247 10:03:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:22:46.247 10:03:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:22:46.247 10:03:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:22:46.247 10:03:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:46.247 10:03:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:46.247 10:03:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:46.247 10:03:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:46.247 10:03:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:46.247 10:03:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:46.247 10:03:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:46.506 00:22:46.506 10:04:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:46.506 10:04:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:46.506 10:04:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:46.764 10:04:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:46.764 10:04:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:46.764 10:04:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:46.764 10:04:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:46.764 10:04:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:46.764 10:04:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:46.764 { 00:22:46.764 "auth": { 00:22:46.764 "dhgroup": "ffdhe6144", 00:22:46.764 "digest": "sha256", 00:22:46.764 "state": "completed" 00:22:46.764 }, 00:22:46.764 "cntlid": 35, 00:22:46.764 "listen_address": { 00:22:46.764 "adrfam": "IPv4", 00:22:46.764 "traddr": "10.0.0.2", 00:22:46.764 "trsvcid": "4420", 00:22:46.764 "trtype": "TCP" 00:22:46.764 }, 00:22:46.764 
"peer_address": { 00:22:46.764 "adrfam": "IPv4", 00:22:46.764 "traddr": "10.0.0.1", 00:22:46.764 "trsvcid": "58094", 00:22:46.764 "trtype": "TCP" 00:22:46.764 }, 00:22:46.764 "qid": 0, 00:22:46.764 "state": "enabled", 00:22:46.764 "thread": "nvmf_tgt_poll_group_000" 00:22:46.764 } 00:22:46.764 ]' 00:22:46.764 10:04:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:46.764 10:04:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:46.764 10:04:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:47.024 10:04:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:47.024 10:04:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:47.024 10:04:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:47.024 10:04:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:47.024 10:04:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:47.283 10:04:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec --hostid a2b6b25a-cc90-4aea-9f09-c06f8a634aec --dhchap-secret DHHC-1:01:YzNiODQ3NTllNGRkNjkzZWNkZGIyOTVjZTkwMGI1OTNQ924+: --dhchap-ctrl-secret DHHC-1:02:YjczYTQ1MGVmY2EyYWFkNTM0NzE5OGNjOTk2ZDJjZmYyMjYyODliZDNhNzYxZmUxbh2Sew==: 00:22:47.849 10:04:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:47.849 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:47.849 10:04:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec 00:22:47.849 10:04:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:47.849 10:04:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:47.849 10:04:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:47.849 10:04:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:47.849 10:04:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:22:47.849 10:04:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:22:48.107 10:04:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 2 00:22:48.108 10:04:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:48.108 10:04:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:22:48.108 10:04:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:22:48.108 10:04:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:22:48.108 10:04:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:48.108 10:04:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:48.108 10:04:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:48.108 10:04:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:48.108 10:04:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:48.108 10:04:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:48.108 10:04:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:48.672 00:22:48.672 10:04:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:48.672 10:04:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:48.672 10:04:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:48.931 10:04:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:48.931 10:04:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:48.931 10:04:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:48.931 10:04:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:48.931 10:04:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:48.931 10:04:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:48.931 { 00:22:48.931 "auth": { 00:22:48.931 "dhgroup": "ffdhe6144", 00:22:48.931 "digest": "sha256", 00:22:48.931 "state": "completed" 00:22:48.931 }, 00:22:48.931 "cntlid": 37, 00:22:48.931 "listen_address": { 00:22:48.931 "adrfam": "IPv4", 00:22:48.931 "traddr": "10.0.0.2", 00:22:48.931 "trsvcid": "4420", 00:22:48.931 "trtype": "TCP" 00:22:48.931 }, 00:22:48.931 "peer_address": { 00:22:48.931 "adrfam": "IPv4", 00:22:48.931 "traddr": "10.0.0.1", 00:22:48.931 "trsvcid": "54152", 00:22:48.931 "trtype": "TCP" 00:22:48.931 }, 00:22:48.931 "qid": 0, 00:22:48.931 "state": "enabled", 00:22:48.931 "thread": "nvmf_tgt_poll_group_000" 00:22:48.931 } 00:22:48.931 ]' 00:22:48.931 10:04:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:48.931 10:04:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:48.931 10:04:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:48.931 10:04:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:48.931 10:04:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:48.931 10:04:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:48.931 10:04:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:48.931 10:04:02 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:49.190 10:04:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec --hostid a2b6b25a-cc90-4aea-9f09-c06f8a634aec --dhchap-secret DHHC-1:02:NjU0ZjNjOWFkMGViNDg5OTZjOGQxNGI0NjE3ZDIxZTVkYjZiMWI1ZTA1MDMyMzY4c2X49g==: --dhchap-ctrl-secret DHHC-1:01:OWI2YTRmODIwODQ4MGIxNTQzYjY0YTdkMWJlYzU0NzEP+U4Y: 00:22:50.165 10:04:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:50.165 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:50.165 10:04:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec 00:22:50.165 10:04:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:50.165 10:04:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:50.165 10:04:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:50.165 10:04:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:50.165 10:04:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:22:50.165 10:04:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:22:50.165 10:04:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 3 00:22:50.165 10:04:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:50.165 10:04:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:22:50.165 10:04:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:22:50.165 10:04:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:22:50.165 10:04:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:50.165 10:04:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec --dhchap-key key3 00:22:50.165 10:04:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:50.165 10:04:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:50.165 10:04:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:50.165 10:04:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:50.165 10:04:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:50.730 00:22:50.730 10:04:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc 
bdev_nvme_get_controllers 00:22:50.730 10:04:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:50.730 10:04:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:50.988 10:04:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:50.988 10:04:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:50.988 10:04:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:50.988 10:04:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:50.988 10:04:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:50.988 10:04:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:50.988 { 00:22:50.988 "auth": { 00:22:50.988 "dhgroup": "ffdhe6144", 00:22:50.988 "digest": "sha256", 00:22:50.988 "state": "completed" 00:22:50.988 }, 00:22:50.988 "cntlid": 39, 00:22:50.988 "listen_address": { 00:22:50.988 "adrfam": "IPv4", 00:22:50.988 "traddr": "10.0.0.2", 00:22:50.988 "trsvcid": "4420", 00:22:50.988 "trtype": "TCP" 00:22:50.988 }, 00:22:50.988 "peer_address": { 00:22:50.988 "adrfam": "IPv4", 00:22:50.988 "traddr": "10.0.0.1", 00:22:50.988 "trsvcid": "54166", 00:22:50.988 "trtype": "TCP" 00:22:50.988 }, 00:22:50.988 "qid": 0, 00:22:50.988 "state": "enabled", 00:22:50.988 "thread": "nvmf_tgt_poll_group_000" 00:22:50.988 } 00:22:50.988 ]' 00:22:50.988 10:04:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:50.988 10:04:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:50.988 10:04:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:50.988 10:04:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:50.988 10:04:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:50.988 10:04:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:50.988 10:04:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:50.989 10:04:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:51.246 10:04:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec --hostid a2b6b25a-cc90-4aea-9f09-c06f8a634aec --dhchap-secret DHHC-1:03:YzJiYjU3YjhkZjhmNTcwYjY2OGFkYmY4ZDc5MTFjOWZmMDUwZTM5OTA3MmQxZGE5Nzg2M2IzMjAxNDNhYjM4Mgzy6hM=: 00:22:51.814 10:04:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:51.814 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:51.814 10:04:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec 00:22:51.814 10:04:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:51.814 10:04:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:51.814 10:04:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
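The cycle traced above repeats for every digest/dhgroup/key combination the test covers. A minimal sketch of one such pass, pieced together only from the commands visible in this trace: the rpc.py path, host socket, addresses, NQNs and flags are copied from the log; key2/ckey2 are assumed to be key names registered by an earlier part of the test script, and showing the target-side calls against rpc.py's default socket is an assumption about how the trace's rpc_cmd wrapper is wired in this run.

  # One connect_authenticate pass (sha256 / ffdhe6144, key index 2), as traced above.
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  host_sock=/var/tmp/host.sock

  # Host side: restrict the initiator to a single digest and DH group.
  $rpc -s $host_sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144

  # Target side: allow the host NQN with the chosen key pair.
  $rpc nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
      nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec \
      --dhchap-key key2 --dhchap-ctrlr-key ckey2

  # Host side: attach the controller, which performs the DH-HMAC-CHAP exchange on qpair setup.
  $rpc -s $host_sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
      -q nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec \
      -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2

  # Inspect the negotiated auth parameters on the target, then tear the controller down.
  $rpc nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 | jq -r '.[0].auth'
  $rpc -s $host_sock bdev_nvme_detach_controller nvme0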
00:22:51.814 10:04:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:22:51.814 10:04:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:51.814 10:04:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:22:51.814 10:04:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:22:52.073 10:04:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 0 00:22:52.073 10:04:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:52.073 10:04:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:22:52.073 10:04:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:22:52.073 10:04:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:22:52.073 10:04:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:52.073 10:04:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:52.073 10:04:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:52.073 10:04:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:52.073 10:04:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:52.073 10:04:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:52.073 10:04:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:52.641 00:22:52.641 10:04:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:52.641 10:04:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:52.641 10:04:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:52.901 10:04:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:52.901 10:04:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:52.901 10:04:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:52.901 10:04:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:52.901 10:04:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:52.901 10:04:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:52.901 { 00:22:52.901 "auth": { 00:22:52.901 "dhgroup": "ffdhe8192", 00:22:52.901 "digest": "sha256", 00:22:52.901 "state": "completed" 00:22:52.901 }, 00:22:52.901 "cntlid": 41, 
00:22:52.901 "listen_address": { 00:22:52.901 "adrfam": "IPv4", 00:22:52.901 "traddr": "10.0.0.2", 00:22:52.901 "trsvcid": "4420", 00:22:52.901 "trtype": "TCP" 00:22:52.901 }, 00:22:52.901 "peer_address": { 00:22:52.901 "adrfam": "IPv4", 00:22:52.901 "traddr": "10.0.0.1", 00:22:52.901 "trsvcid": "54196", 00:22:52.901 "trtype": "TCP" 00:22:52.901 }, 00:22:52.901 "qid": 0, 00:22:52.901 "state": "enabled", 00:22:52.901 "thread": "nvmf_tgt_poll_group_000" 00:22:52.901 } 00:22:52.901 ]' 00:22:52.901 10:04:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:52.901 10:04:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:52.901 10:04:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:53.160 10:04:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:53.160 10:04:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:53.160 10:04:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:53.160 10:04:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:53.160 10:04:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:53.420 10:04:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec --hostid a2b6b25a-cc90-4aea-9f09-c06f8a634aec --dhchap-secret DHHC-1:00:NWQxMzZmNWQwNzExZTJlMmFlZjYyZTRlZjNhY2ZkOGQ1N2U4NDExZWJlYjNhZmFl2ifiLg==: --dhchap-ctrl-secret DHHC-1:03:NTM0YWI3NDk4YjFmMzczMjJkYjJkNjdmZmE1ZDU2OTU0ZjAzNjM0MzUyOGFhMDc4MmQxMjliYjEyYmM0MTFmMrepI4o=: 00:22:53.988 10:04:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:53.988 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:53.988 10:04:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec 00:22:53.988 10:04:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:53.988 10:04:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:53.988 10:04:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:53.988 10:04:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:53.988 10:04:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:22:53.988 10:04:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:22:54.247 10:04:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 1 00:22:54.247 10:04:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:54.247 10:04:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:22:54.247 10:04:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:22:54.247 10:04:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:22:54.247 
10:04:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:54.247 10:04:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:54.247 10:04:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:54.247 10:04:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:54.247 10:04:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:54.247 10:04:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:54.247 10:04:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:54.815 00:22:54.815 10:04:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:54.815 10:04:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:54.815 10:04:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:55.074 10:04:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:55.074 10:04:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:55.074 10:04:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:55.074 10:04:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:55.074 10:04:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:55.074 10:04:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:55.074 { 00:22:55.074 "auth": { 00:22:55.074 "dhgroup": "ffdhe8192", 00:22:55.074 "digest": "sha256", 00:22:55.074 "state": "completed" 00:22:55.074 }, 00:22:55.074 "cntlid": 43, 00:22:55.074 "listen_address": { 00:22:55.074 "adrfam": "IPv4", 00:22:55.074 "traddr": "10.0.0.2", 00:22:55.074 "trsvcid": "4420", 00:22:55.074 "trtype": "TCP" 00:22:55.074 }, 00:22:55.074 "peer_address": { 00:22:55.074 "adrfam": "IPv4", 00:22:55.074 "traddr": "10.0.0.1", 00:22:55.074 "trsvcid": "54232", 00:22:55.074 "trtype": "TCP" 00:22:55.074 }, 00:22:55.074 "qid": 0, 00:22:55.074 "state": "enabled", 00:22:55.074 "thread": "nvmf_tgt_poll_group_000" 00:22:55.074 } 00:22:55.074 ]' 00:22:55.074 10:04:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:55.074 10:04:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:55.074 10:04:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:55.074 10:04:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:55.074 10:04:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:55.074 10:04:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ 
completed == \c\o\m\p\l\e\t\e\d ]] 00:22:55.074 10:04:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:55.074 10:04:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:55.332 10:04:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec --hostid a2b6b25a-cc90-4aea-9f09-c06f8a634aec --dhchap-secret DHHC-1:01:YzNiODQ3NTllNGRkNjkzZWNkZGIyOTVjZTkwMGI1OTNQ924+: --dhchap-ctrl-secret DHHC-1:02:YjczYTQ1MGVmY2EyYWFkNTM0NzE5OGNjOTk2ZDJjZmYyMjYyODliZDNhNzYxZmUxbh2Sew==: 00:22:55.902 10:04:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:55.902 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:55.902 10:04:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec 00:22:55.902 10:04:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:55.902 10:04:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:55.902 10:04:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:55.902 10:04:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:55.902 10:04:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:22:55.902 10:04:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:22:56.161 10:04:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 2 00:22:56.161 10:04:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:56.161 10:04:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:22:56.161 10:04:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:22:56.161 10:04:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:22:56.161 10:04:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:56.161 10:04:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:56.161 10:04:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:56.161 10:04:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:56.161 10:04:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:56.161 10:04:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:56.161 10:04:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:56.729 00:22:56.729 10:04:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:56.729 10:04:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:56.729 10:04:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:56.988 10:04:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:56.988 10:04:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:56.988 10:04:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:56.988 10:04:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:56.988 10:04:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:56.988 10:04:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:56.988 { 00:22:56.988 "auth": { 00:22:56.988 "dhgroup": "ffdhe8192", 00:22:56.988 "digest": "sha256", 00:22:56.988 "state": "completed" 00:22:56.988 }, 00:22:56.988 "cntlid": 45, 00:22:56.988 "listen_address": { 00:22:56.988 "adrfam": "IPv4", 00:22:56.988 "traddr": "10.0.0.2", 00:22:56.988 "trsvcid": "4420", 00:22:56.988 "trtype": "TCP" 00:22:56.988 }, 00:22:56.988 "peer_address": { 00:22:56.988 "adrfam": "IPv4", 00:22:56.988 "traddr": "10.0.0.1", 00:22:56.988 "trsvcid": "54244", 00:22:56.988 "trtype": "TCP" 00:22:56.988 }, 00:22:56.988 "qid": 0, 00:22:56.988 "state": "enabled", 00:22:56.988 "thread": "nvmf_tgt_poll_group_000" 00:22:56.988 } 00:22:56.988 ]' 00:22:56.988 10:04:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:57.248 10:04:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:57.248 10:04:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:57.248 10:04:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:57.248 10:04:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:57.248 10:04:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:57.248 10:04:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:57.248 10:04:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:57.532 10:04:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec --hostid a2b6b25a-cc90-4aea-9f09-c06f8a634aec --dhchap-secret DHHC-1:02:NjU0ZjNjOWFkMGViNDg5OTZjOGQxNGI0NjE3ZDIxZTVkYjZiMWI1ZTA1MDMyMzY4c2X49g==: --dhchap-ctrl-secret DHHC-1:01:OWI2YTRmODIwODQ4MGIxNTQzYjY0YTdkMWJlYzU0NzEP+U4Y: 00:22:58.111 10:04:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:58.111 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:58.111 10:04:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec 00:22:58.111 10:04:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:58.111 10:04:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:58.111 10:04:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:58.111 10:04:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:58.111 10:04:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:22:58.111 10:04:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:22:58.370 10:04:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 3 00:22:58.371 10:04:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:58.371 10:04:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:22:58.371 10:04:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:22:58.371 10:04:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:22:58.371 10:04:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:58.371 10:04:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec --dhchap-key key3 00:22:58.371 10:04:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:58.371 10:04:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:58.371 10:04:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:58.371 10:04:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:58.371 10:04:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:58.941 00:22:58.941 10:04:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:58.941 10:04:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:58.941 10:04:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:59.200 10:04:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:59.200 10:04:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:59.200 10:04:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:59.200 10:04:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:59.200 10:04:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:59.200 10:04:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 
00:22:59.200 { 00:22:59.200 "auth": { 00:22:59.200 "dhgroup": "ffdhe8192", 00:22:59.200 "digest": "sha256", 00:22:59.200 "state": "completed" 00:22:59.200 }, 00:22:59.200 "cntlid": 47, 00:22:59.200 "listen_address": { 00:22:59.200 "adrfam": "IPv4", 00:22:59.200 "traddr": "10.0.0.2", 00:22:59.200 "trsvcid": "4420", 00:22:59.200 "trtype": "TCP" 00:22:59.200 }, 00:22:59.200 "peer_address": { 00:22:59.200 "adrfam": "IPv4", 00:22:59.200 "traddr": "10.0.0.1", 00:22:59.200 "trsvcid": "41664", 00:22:59.200 "trtype": "TCP" 00:22:59.200 }, 00:22:59.200 "qid": 0, 00:22:59.200 "state": "enabled", 00:22:59.200 "thread": "nvmf_tgt_poll_group_000" 00:22:59.200 } 00:22:59.200 ]' 00:22:59.200 10:04:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:59.200 10:04:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:59.200 10:04:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:59.200 10:04:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:59.200 10:04:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:59.200 10:04:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:59.200 10:04:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:59.200 10:04:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:59.458 10:04:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec --hostid a2b6b25a-cc90-4aea-9f09-c06f8a634aec --dhchap-secret DHHC-1:03:YzJiYjU3YjhkZjhmNTcwYjY2OGFkYmY4ZDc5MTFjOWZmMDUwZTM5OTA3MmQxZGE5Nzg2M2IzMjAxNDNhYjM4Mgzy6hM=: 00:23:00.025 10:04:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:00.025 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:00.025 10:04:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec 00:23:00.025 10:04:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:00.025 10:04:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:00.025 10:04:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:00.025 10:04:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:23:00.025 10:04:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:23:00.025 10:04:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:00.025 10:04:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:23:00.025 10:04:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:23:00.284 10:04:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 0 00:23:00.284 10:04:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 
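The JSON blocks interleaved above are the raw output of nvmf_subsystem_get_qpairs; the script only asserts on the three fields under "auth". A compact restatement of that check, using the same jq filters that appear in the trace, with the expected values for the sha384/null pass starting here; rpc_cmd is the trace's target-side wrapper and is assumed to be available as in the test environment.

  # Verify the negotiated DH-HMAC-CHAP parameters on the first qpair of the subsystem.
  qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
  [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha384    ]]
  [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == null      ]]
  [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]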
00:23:00.284 10:04:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:23:00.284 10:04:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:23:00.284 10:04:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:23:00.284 10:04:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:00.284 10:04:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:00.284 10:04:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:00.284 10:04:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:00.284 10:04:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:00.284 10:04:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:00.284 10:04:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:00.542 00:23:00.542 10:04:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:00.542 10:04:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:00.542 10:04:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:00.800 10:04:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:00.800 10:04:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:00.800 10:04:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:00.800 10:04:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:00.800 10:04:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:00.800 10:04:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:00.800 { 00:23:00.800 "auth": { 00:23:00.800 "dhgroup": "null", 00:23:00.800 "digest": "sha384", 00:23:00.800 "state": "completed" 00:23:00.800 }, 00:23:00.800 "cntlid": 49, 00:23:00.800 "listen_address": { 00:23:00.800 "adrfam": "IPv4", 00:23:00.800 "traddr": "10.0.0.2", 00:23:00.800 "trsvcid": "4420", 00:23:00.800 "trtype": "TCP" 00:23:00.800 }, 00:23:00.800 "peer_address": { 00:23:00.800 "adrfam": "IPv4", 00:23:00.800 "traddr": "10.0.0.1", 00:23:00.800 "trsvcid": "41690", 00:23:00.800 "trtype": "TCP" 00:23:00.800 }, 00:23:00.800 "qid": 0, 00:23:00.800 "state": "enabled", 00:23:00.800 "thread": "nvmf_tgt_poll_group_000" 00:23:00.800 } 00:23:00.800 ]' 00:23:00.800 10:04:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:00.801 10:04:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:23:00.801 10:04:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:01.061 10:04:14 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:23:01.061 10:04:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:01.061 10:04:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:01.061 10:04:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:01.061 10:04:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:01.320 10:04:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec --hostid a2b6b25a-cc90-4aea-9f09-c06f8a634aec --dhchap-secret DHHC-1:00:NWQxMzZmNWQwNzExZTJlMmFlZjYyZTRlZjNhY2ZkOGQ1N2U4NDExZWJlYjNhZmFl2ifiLg==: --dhchap-ctrl-secret DHHC-1:03:NTM0YWI3NDk4YjFmMzczMjJkYjJkNjdmZmE1ZDU2OTU0ZjAzNjM0MzUyOGFhMDc4MmQxMjliYjEyYmM0MTFmMrepI4o=: 00:23:01.890 10:04:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:01.890 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:01.890 10:04:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec 00:23:01.890 10:04:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:01.890 10:04:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:01.890 10:04:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:01.890 10:04:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:01.890 10:04:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:23:01.890 10:04:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:23:02.148 10:04:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 1 00:23:02.149 10:04:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:02.149 10:04:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:23:02.149 10:04:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:23:02.149 10:04:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:23:02.149 10:04:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:02.149 10:04:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:02.149 10:04:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:02.149 10:04:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:02.149 10:04:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:02.149 10:04:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:02.149 10:04:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:02.408 00:23:02.408 10:04:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:02.408 10:04:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:02.408 10:04:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:02.668 10:04:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:02.668 10:04:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:02.668 10:04:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:02.668 10:04:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:02.668 10:04:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:02.668 10:04:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:02.668 { 00:23:02.668 "auth": { 00:23:02.668 "dhgroup": "null", 00:23:02.668 "digest": "sha384", 00:23:02.668 "state": "completed" 00:23:02.668 }, 00:23:02.668 "cntlid": 51, 00:23:02.668 "listen_address": { 00:23:02.668 "adrfam": "IPv4", 00:23:02.668 "traddr": "10.0.0.2", 00:23:02.668 "trsvcid": "4420", 00:23:02.668 "trtype": "TCP" 00:23:02.668 }, 00:23:02.668 "peer_address": { 00:23:02.668 "adrfam": "IPv4", 00:23:02.668 "traddr": "10.0.0.1", 00:23:02.668 "trsvcid": "41724", 00:23:02.668 "trtype": "TCP" 00:23:02.668 }, 00:23:02.668 "qid": 0, 00:23:02.668 "state": "enabled", 00:23:02.668 "thread": "nvmf_tgt_poll_group_000" 00:23:02.668 } 00:23:02.668 ]' 00:23:02.668 10:04:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:02.668 10:04:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:23:02.668 10:04:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:02.668 10:04:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:23:02.668 10:04:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:02.668 10:04:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:02.668 10:04:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:02.668 10:04:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:02.928 10:04:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec --hostid a2b6b25a-cc90-4aea-9f09-c06f8a634aec --dhchap-secret DHHC-1:01:YzNiODQ3NTllNGRkNjkzZWNkZGIyOTVjZTkwMGI1OTNQ924+: --dhchap-ctrl-secret DHHC-1:02:YjczYTQ1MGVmY2EyYWFkNTM0NzE5OGNjOTk2ZDJjZmYyMjYyODliZDNhNzYxZmUxbh2Sew==: 00:23:03.866 10:04:17 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:03.866 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:03.866 10:04:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec 00:23:03.866 10:04:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:03.866 10:04:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:03.866 10:04:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:03.866 10:04:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:03.867 10:04:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:23:03.867 10:04:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:23:03.867 10:04:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 2 00:23:03.867 10:04:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:03.867 10:04:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:23:03.867 10:04:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:23:03.867 10:04:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:23:03.867 10:04:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:03.867 10:04:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:03.867 10:04:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:03.867 10:04:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:03.867 10:04:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:03.867 10:04:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:03.867 10:04:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:04.125 00:23:04.125 10:04:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:04.125 10:04:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:04.125 10:04:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:04.384 10:04:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:04.384 10:04:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:04.384 10:04:17 nvmf_tcp.nvmf_auth_target 
-- common/autotest_common.sh@559 -- # xtrace_disable 00:23:04.384 10:04:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:04.384 10:04:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:04.384 10:04:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:04.384 { 00:23:04.384 "auth": { 00:23:04.384 "dhgroup": "null", 00:23:04.385 "digest": "sha384", 00:23:04.385 "state": "completed" 00:23:04.385 }, 00:23:04.385 "cntlid": 53, 00:23:04.385 "listen_address": { 00:23:04.385 "adrfam": "IPv4", 00:23:04.385 "traddr": "10.0.0.2", 00:23:04.385 "trsvcid": "4420", 00:23:04.385 "trtype": "TCP" 00:23:04.385 }, 00:23:04.385 "peer_address": { 00:23:04.385 "adrfam": "IPv4", 00:23:04.385 "traddr": "10.0.0.1", 00:23:04.385 "trsvcid": "41750", 00:23:04.385 "trtype": "TCP" 00:23:04.385 }, 00:23:04.385 "qid": 0, 00:23:04.385 "state": "enabled", 00:23:04.385 "thread": "nvmf_tgt_poll_group_000" 00:23:04.385 } 00:23:04.385 ]' 00:23:04.385 10:04:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:04.385 10:04:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:23:04.385 10:04:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:04.644 10:04:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:23:04.645 10:04:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:04.645 10:04:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:04.645 10:04:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:04.645 10:04:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:04.904 10:04:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec --hostid a2b6b25a-cc90-4aea-9f09-c06f8a634aec --dhchap-secret DHHC-1:02:NjU0ZjNjOWFkMGViNDg5OTZjOGQxNGI0NjE3ZDIxZTVkYjZiMWI1ZTA1MDMyMzY4c2X49g==: --dhchap-ctrl-secret DHHC-1:01:OWI2YTRmODIwODQ4MGIxNTQzYjY0YTdkMWJlYzU0NzEP+U4Y: 00:23:05.473 10:04:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:05.473 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:05.473 10:04:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec 00:23:05.473 10:04:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:05.473 10:04:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:05.473 10:04:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:05.473 10:04:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:05.473 10:04:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:23:05.473 10:04:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:23:05.732 10:04:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 
-- # connect_authenticate sha384 null 3 00:23:05.732 10:04:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:05.732 10:04:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:23:05.732 10:04:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:23:05.732 10:04:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:23:05.732 10:04:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:05.732 10:04:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec --dhchap-key key3 00:23:05.732 10:04:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:05.732 10:04:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:05.732 10:04:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:05.732 10:04:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:23:05.732 10:04:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:23:05.991 00:23:05.991 10:04:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:05.991 10:04:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:05.991 10:04:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:06.250 10:04:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:06.250 10:04:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:06.250 10:04:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:06.250 10:04:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:06.250 10:04:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:06.250 10:04:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:06.250 { 00:23:06.250 "auth": { 00:23:06.250 "dhgroup": "null", 00:23:06.250 "digest": "sha384", 00:23:06.250 "state": "completed" 00:23:06.250 }, 00:23:06.250 "cntlid": 55, 00:23:06.250 "listen_address": { 00:23:06.250 "adrfam": "IPv4", 00:23:06.250 "traddr": "10.0.0.2", 00:23:06.250 "trsvcid": "4420", 00:23:06.250 "trtype": "TCP" 00:23:06.250 }, 00:23:06.250 "peer_address": { 00:23:06.250 "adrfam": "IPv4", 00:23:06.250 "traddr": "10.0.0.1", 00:23:06.250 "trsvcid": "41766", 00:23:06.250 "trtype": "TCP" 00:23:06.250 }, 00:23:06.250 "qid": 0, 00:23:06.250 "state": "enabled", 00:23:06.250 "thread": "nvmf_tgt_poll_group_000" 00:23:06.250 } 00:23:06.250 ]' 00:23:06.250 10:04:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:06.250 10:04:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:23:06.250 10:04:19 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:06.250 10:04:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:23:06.250 10:04:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:06.250 10:04:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:06.250 10:04:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:06.250 10:04:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:06.509 10:04:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec --hostid a2b6b25a-cc90-4aea-9f09-c06f8a634aec --dhchap-secret DHHC-1:03:YzJiYjU3YjhkZjhmNTcwYjY2OGFkYmY4ZDc5MTFjOWZmMDUwZTM5OTA3MmQxZGE5Nzg2M2IzMjAxNDNhYjM4Mgzy6hM=: 00:23:07.079 10:04:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:07.079 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:07.079 10:04:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec 00:23:07.079 10:04:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:07.079 10:04:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:07.079 10:04:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:07.079 10:04:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:23:07.079 10:04:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:07.079 10:04:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:23:07.079 10:04:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:23:07.339 10:04:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 0 00:23:07.339 10:04:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:07.339 10:04:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:23:07.339 10:04:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:23:07.339 10:04:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:23:07.339 10:04:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:07.339 10:04:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:07.339 10:04:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:07.339 10:04:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:07.339 10:04:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:07.339 10:04:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:07.339 10:04:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:07.599 00:23:07.599 10:04:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:07.599 10:04:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:07.599 10:04:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:07.860 10:04:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:07.860 10:04:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:07.860 10:04:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:07.860 10:04:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:07.860 10:04:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:07.860 10:04:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:07.860 { 00:23:07.860 "auth": { 00:23:07.860 "dhgroup": "ffdhe2048", 00:23:07.860 "digest": "sha384", 00:23:07.860 "state": "completed" 00:23:07.860 }, 00:23:07.860 "cntlid": 57, 00:23:07.860 "listen_address": { 00:23:07.860 "adrfam": "IPv4", 00:23:07.860 "traddr": "10.0.0.2", 00:23:07.860 "trsvcid": "4420", 00:23:07.860 "trtype": "TCP" 00:23:07.860 }, 00:23:07.860 "peer_address": { 00:23:07.860 "adrfam": "IPv4", 00:23:07.860 "traddr": "10.0.0.1", 00:23:07.860 "trsvcid": "57626", 00:23:07.860 "trtype": "TCP" 00:23:07.860 }, 00:23:07.860 "qid": 0, 00:23:07.860 "state": "enabled", 00:23:07.860 "thread": "nvmf_tgt_poll_group_000" 00:23:07.860 } 00:23:07.860 ]' 00:23:07.860 10:04:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:07.860 10:04:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:23:07.860 10:04:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:08.119 10:04:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:23:08.119 10:04:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:08.119 10:04:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:08.119 10:04:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:08.119 10:04:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:08.379 10:04:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec --hostid a2b6b25a-cc90-4aea-9f09-c06f8a634aec --dhchap-secret DHHC-1:00:NWQxMzZmNWQwNzExZTJlMmFlZjYyZTRlZjNhY2ZkOGQ1N2U4NDExZWJlYjNhZmFl2ifiLg==: --dhchap-ctrl-secret 
DHHC-1:03:NTM0YWI3NDk4YjFmMzczMjJkYjJkNjdmZmE1ZDU2OTU0ZjAzNjM0MzUyOGFhMDc4MmQxMjliYjEyYmM0MTFmMrepI4o=: 00:23:08.948 10:04:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:08.948 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:08.948 10:04:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec 00:23:08.948 10:04:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:08.948 10:04:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:08.948 10:04:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:08.948 10:04:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:08.948 10:04:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:23:08.948 10:04:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:23:09.208 10:04:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 1 00:23:09.208 10:04:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:09.208 10:04:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:23:09.208 10:04:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:23:09.208 10:04:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:23:09.208 10:04:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:09.208 10:04:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:09.208 10:04:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:09.208 10:04:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:09.208 10:04:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:09.208 10:04:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:09.208 10:04:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:09.468 00:23:09.468 10:04:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:09.468 10:04:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:09.468 10:04:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:09.727 10:04:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 
00:23:09.727 10:04:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:09.727 10:04:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:09.727 10:04:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:09.727 10:04:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:09.727 10:04:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:09.727 { 00:23:09.727 "auth": { 00:23:09.727 "dhgroup": "ffdhe2048", 00:23:09.727 "digest": "sha384", 00:23:09.727 "state": "completed" 00:23:09.727 }, 00:23:09.727 "cntlid": 59, 00:23:09.727 "listen_address": { 00:23:09.727 "adrfam": "IPv4", 00:23:09.727 "traddr": "10.0.0.2", 00:23:09.727 "trsvcid": "4420", 00:23:09.727 "trtype": "TCP" 00:23:09.727 }, 00:23:09.727 "peer_address": { 00:23:09.727 "adrfam": "IPv4", 00:23:09.727 "traddr": "10.0.0.1", 00:23:09.727 "trsvcid": "57662", 00:23:09.727 "trtype": "TCP" 00:23:09.727 }, 00:23:09.727 "qid": 0, 00:23:09.727 "state": "enabled", 00:23:09.727 "thread": "nvmf_tgt_poll_group_000" 00:23:09.727 } 00:23:09.727 ]' 00:23:09.727 10:04:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:09.727 10:04:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:23:09.727 10:04:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:09.727 10:04:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:23:09.727 10:04:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:09.727 10:04:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:09.727 10:04:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:09.727 10:04:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:09.986 10:04:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec --hostid a2b6b25a-cc90-4aea-9f09-c06f8a634aec --dhchap-secret DHHC-1:01:YzNiODQ3NTllNGRkNjkzZWNkZGIyOTVjZTkwMGI1OTNQ924+: --dhchap-ctrl-secret DHHC-1:02:YjczYTQ1MGVmY2EyYWFkNTM0NzE5OGNjOTk2ZDJjZmYyMjYyODliZDNhNzYxZmUxbh2Sew==: 00:23:10.554 10:04:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:10.554 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:10.554 10:04:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec 00:23:10.554 10:04:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:10.554 10:04:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:10.554 10:04:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:10.554 10:04:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:10.554 10:04:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:23:10.554 10:04:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:23:10.812 10:04:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 2 00:23:10.812 10:04:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:10.812 10:04:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:23:10.812 10:04:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:23:10.812 10:04:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:23:10.812 10:04:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:10.812 10:04:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:10.812 10:04:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:10.812 10:04:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:10.812 10:04:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:10.812 10:04:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:10.812 10:04:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:11.071 00:23:11.071 10:04:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:11.071 10:04:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:11.071 10:04:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:11.330 10:04:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:11.330 10:04:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:11.330 10:04:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:11.330 10:04:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:11.330 10:04:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:11.330 10:04:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:11.330 { 00:23:11.330 "auth": { 00:23:11.330 "dhgroup": "ffdhe2048", 00:23:11.330 "digest": "sha384", 00:23:11.330 "state": "completed" 00:23:11.330 }, 00:23:11.330 "cntlid": 61, 00:23:11.330 "listen_address": { 00:23:11.330 "adrfam": "IPv4", 00:23:11.330 "traddr": "10.0.0.2", 00:23:11.330 "trsvcid": "4420", 00:23:11.330 "trtype": "TCP" 00:23:11.330 }, 00:23:11.330 "peer_address": { 00:23:11.330 "adrfam": "IPv4", 00:23:11.330 "traddr": "10.0.0.1", 00:23:11.330 "trsvcid": "57696", 00:23:11.330 "trtype": "TCP" 00:23:11.330 }, 00:23:11.330 "qid": 0, 00:23:11.330 "state": "enabled", 00:23:11.330 "thread": 
"nvmf_tgt_poll_group_000" 00:23:11.330 } 00:23:11.330 ]' 00:23:11.330 10:04:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:11.330 10:04:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:23:11.330 10:04:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:11.589 10:04:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:23:11.589 10:04:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:11.589 10:04:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:11.589 10:04:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:11.589 10:04:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:11.848 10:04:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec --hostid a2b6b25a-cc90-4aea-9f09-c06f8a634aec --dhchap-secret DHHC-1:02:NjU0ZjNjOWFkMGViNDg5OTZjOGQxNGI0NjE3ZDIxZTVkYjZiMWI1ZTA1MDMyMzY4c2X49g==: --dhchap-ctrl-secret DHHC-1:01:OWI2YTRmODIwODQ4MGIxNTQzYjY0YTdkMWJlYzU0NzEP+U4Y: 00:23:12.416 10:04:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:12.416 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:12.416 10:04:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec 00:23:12.416 10:04:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:12.416 10:04:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:12.416 10:04:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:12.416 10:04:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:12.416 10:04:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:23:12.416 10:04:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:23:12.676 10:04:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 3 00:23:12.676 10:04:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:12.676 10:04:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:23:12.676 10:04:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:23:12.676 10:04:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:23:12.676 10:04:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:12.676 10:04:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec --dhchap-key key3 00:23:12.676 10:04:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:12.676 10:04:26 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:23:12.676 10:04:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:12.676 10:04:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:23:12.676 10:04:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:23:12.936 00:23:12.936 10:04:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:12.936 10:04:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:12.936 10:04:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:13.194 10:04:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:13.194 10:04:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:13.194 10:04:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:13.194 10:04:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:13.194 10:04:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:13.194 10:04:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:13.194 { 00:23:13.194 "auth": { 00:23:13.194 "dhgroup": "ffdhe2048", 00:23:13.194 "digest": "sha384", 00:23:13.194 "state": "completed" 00:23:13.194 }, 00:23:13.194 "cntlid": 63, 00:23:13.194 "listen_address": { 00:23:13.194 "adrfam": "IPv4", 00:23:13.194 "traddr": "10.0.0.2", 00:23:13.194 "trsvcid": "4420", 00:23:13.194 "trtype": "TCP" 00:23:13.194 }, 00:23:13.194 "peer_address": { 00:23:13.194 "adrfam": "IPv4", 00:23:13.194 "traddr": "10.0.0.1", 00:23:13.194 "trsvcid": "57732", 00:23:13.194 "trtype": "TCP" 00:23:13.194 }, 00:23:13.194 "qid": 0, 00:23:13.194 "state": "enabled", 00:23:13.194 "thread": "nvmf_tgt_poll_group_000" 00:23:13.194 } 00:23:13.194 ]' 00:23:13.194 10:04:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:13.194 10:04:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:23:13.194 10:04:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:13.194 10:04:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:23:13.194 10:04:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:13.194 10:04:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:13.194 10:04:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:13.194 10:04:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:13.469 10:04:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec --hostid 
a2b6b25a-cc90-4aea-9f09-c06f8a634aec --dhchap-secret DHHC-1:03:YzJiYjU3YjhkZjhmNTcwYjY2OGFkYmY4ZDc5MTFjOWZmMDUwZTM5OTA3MmQxZGE5Nzg2M2IzMjAxNDNhYjM4Mgzy6hM=: 00:23:14.065 10:04:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:14.065 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:14.065 10:04:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec 00:23:14.065 10:04:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:14.065 10:04:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:14.065 10:04:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:14.065 10:04:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:23:14.065 10:04:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:14.065 10:04:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:23:14.065 10:04:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:23:14.324 10:04:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 0 00:23:14.324 10:04:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:14.324 10:04:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:23:14.324 10:04:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:23:14.324 10:04:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:23:14.324 10:04:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:14.324 10:04:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:14.324 10:04:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:14.324 10:04:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:14.324 10:04:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:14.324 10:04:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:14.324 10:04:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:14.584 00:23:14.843 10:04:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:14.844 10:04:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:14.844 10:04:28 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:14.844 10:04:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:14.844 10:04:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:14.844 10:04:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:14.844 10:04:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:14.844 10:04:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:14.844 10:04:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:14.844 { 00:23:14.844 "auth": { 00:23:14.844 "dhgroup": "ffdhe3072", 00:23:14.844 "digest": "sha384", 00:23:14.844 "state": "completed" 00:23:14.844 }, 00:23:14.844 "cntlid": 65, 00:23:14.844 "listen_address": { 00:23:14.844 "adrfam": "IPv4", 00:23:14.844 "traddr": "10.0.0.2", 00:23:14.844 "trsvcid": "4420", 00:23:14.844 "trtype": "TCP" 00:23:14.844 }, 00:23:14.844 "peer_address": { 00:23:14.844 "adrfam": "IPv4", 00:23:14.844 "traddr": "10.0.0.1", 00:23:14.844 "trsvcid": "57770", 00:23:14.844 "trtype": "TCP" 00:23:14.844 }, 00:23:14.844 "qid": 0, 00:23:14.844 "state": "enabled", 00:23:14.844 "thread": "nvmf_tgt_poll_group_000" 00:23:14.844 } 00:23:14.844 ]' 00:23:14.844 10:04:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:15.103 10:04:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:23:15.103 10:04:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:15.103 10:04:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:23:15.103 10:04:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:15.103 10:04:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:15.103 10:04:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:15.103 10:04:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:15.363 10:04:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec --hostid a2b6b25a-cc90-4aea-9f09-c06f8a634aec --dhchap-secret DHHC-1:00:NWQxMzZmNWQwNzExZTJlMmFlZjYyZTRlZjNhY2ZkOGQ1N2U4NDExZWJlYjNhZmFl2ifiLg==: --dhchap-ctrl-secret DHHC-1:03:NTM0YWI3NDk4YjFmMzczMjJkYjJkNjdmZmE1ZDU2OTU0ZjAzNjM0MzUyOGFhMDc4MmQxMjliYjEyYmM0MTFmMrepI4o=: 00:23:15.932 10:04:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:15.932 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:15.933 10:04:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec 00:23:15.933 10:04:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:15.933 10:04:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:15.933 10:04:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:15.933 10:04:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:15.933 
10:04:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:23:15.933 10:04:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:23:16.192 10:04:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 1 00:23:16.192 10:04:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:16.192 10:04:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:23:16.192 10:04:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:23:16.192 10:04:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:23:16.192 10:04:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:16.192 10:04:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:16.192 10:04:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:16.192 10:04:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:16.192 10:04:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:16.192 10:04:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:16.192 10:04:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:16.450 00:23:16.450 10:04:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:16.450 10:04:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:16.450 10:04:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:16.709 10:04:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:16.709 10:04:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:16.709 10:04:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:16.709 10:04:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:16.709 10:04:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:16.709 10:04:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:16.709 { 00:23:16.709 "auth": { 00:23:16.709 "dhgroup": "ffdhe3072", 00:23:16.709 "digest": "sha384", 00:23:16.709 "state": "completed" 00:23:16.709 }, 00:23:16.709 "cntlid": 67, 00:23:16.709 "listen_address": { 00:23:16.709 "adrfam": "IPv4", 00:23:16.709 "traddr": "10.0.0.2", 00:23:16.709 "trsvcid": "4420", 00:23:16.709 "trtype": "TCP" 00:23:16.709 }, 00:23:16.709 "peer_address": { 00:23:16.709 
"adrfam": "IPv4", 00:23:16.709 "traddr": "10.0.0.1", 00:23:16.709 "trsvcid": "57808", 00:23:16.709 "trtype": "TCP" 00:23:16.709 }, 00:23:16.709 "qid": 0, 00:23:16.709 "state": "enabled", 00:23:16.709 "thread": "nvmf_tgt_poll_group_000" 00:23:16.709 } 00:23:16.709 ]' 00:23:16.709 10:04:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:16.709 10:04:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:23:16.709 10:04:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:16.709 10:04:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:23:16.709 10:04:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:16.709 10:04:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:16.709 10:04:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:16.966 10:04:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:16.966 10:04:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec --hostid a2b6b25a-cc90-4aea-9f09-c06f8a634aec --dhchap-secret DHHC-1:01:YzNiODQ3NTllNGRkNjkzZWNkZGIyOTVjZTkwMGI1OTNQ924+: --dhchap-ctrl-secret DHHC-1:02:YjczYTQ1MGVmY2EyYWFkNTM0NzE5OGNjOTk2ZDJjZmYyMjYyODliZDNhNzYxZmUxbh2Sew==: 00:23:17.532 10:04:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:17.813 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:17.813 10:04:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec 00:23:17.813 10:04:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:17.813 10:04:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:17.813 10:04:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:17.813 10:04:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:17.813 10:04:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:23:17.813 10:04:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:23:17.813 10:04:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 2 00:23:17.813 10:04:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:17.813 10:04:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:23:17.813 10:04:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:23:17.813 10:04:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:23:17.813 10:04:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:17.813 10:04:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:17.813 10:04:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:17.813 10:04:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:17.813 10:04:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:17.813 10:04:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:17.813 10:04:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:18.070 00:23:18.329 10:04:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:18.329 10:04:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:18.329 10:04:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:18.329 10:04:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:18.329 10:04:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:18.330 10:04:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:18.330 10:04:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:18.330 10:04:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:18.330 10:04:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:18.330 { 00:23:18.330 "auth": { 00:23:18.330 "dhgroup": "ffdhe3072", 00:23:18.330 "digest": "sha384", 00:23:18.330 "state": "completed" 00:23:18.330 }, 00:23:18.330 "cntlid": 69, 00:23:18.330 "listen_address": { 00:23:18.330 "adrfam": "IPv4", 00:23:18.330 "traddr": "10.0.0.2", 00:23:18.330 "trsvcid": "4420", 00:23:18.330 "trtype": "TCP" 00:23:18.330 }, 00:23:18.330 "peer_address": { 00:23:18.330 "adrfam": "IPv4", 00:23:18.330 "traddr": "10.0.0.1", 00:23:18.330 "trsvcid": "44178", 00:23:18.330 "trtype": "TCP" 00:23:18.330 }, 00:23:18.330 "qid": 0, 00:23:18.330 "state": "enabled", 00:23:18.330 "thread": "nvmf_tgt_poll_group_000" 00:23:18.330 } 00:23:18.330 ]' 00:23:18.330 10:04:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:18.589 10:04:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:23:18.589 10:04:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:18.589 10:04:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:23:18.589 10:04:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:18.589 10:04:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:18.589 10:04:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:18.589 10:04:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:18.846 10:04:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec --hostid a2b6b25a-cc90-4aea-9f09-c06f8a634aec --dhchap-secret DHHC-1:02:NjU0ZjNjOWFkMGViNDg5OTZjOGQxNGI0NjE3ZDIxZTVkYjZiMWI1ZTA1MDMyMzY4c2X49g==: --dhchap-ctrl-secret DHHC-1:01:OWI2YTRmODIwODQ4MGIxNTQzYjY0YTdkMWJlYzU0NzEP+U4Y: 00:23:19.411 10:04:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:19.411 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:19.411 10:04:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec 00:23:19.411 10:04:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:19.411 10:04:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:19.411 10:04:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:19.411 10:04:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:19.411 10:04:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:23:19.411 10:04:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:23:19.411 10:04:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 3 00:23:19.411 10:04:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:19.411 10:04:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:23:19.411 10:04:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:23:19.411 10:04:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:23:19.411 10:04:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:19.411 10:04:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec --dhchap-key key3 00:23:19.411 10:04:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:19.411 10:04:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:19.670 10:04:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:19.670 10:04:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:23:19.670 10:04:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:23:19.929 00:23:19.929 10:04:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 
00:23:19.929 10:04:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:19.929 10:04:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:19.929 10:04:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:19.929 10:04:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:19.929 10:04:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:19.929 10:04:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:19.929 10:04:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:19.929 10:04:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:19.929 { 00:23:19.929 "auth": { 00:23:19.929 "dhgroup": "ffdhe3072", 00:23:19.929 "digest": "sha384", 00:23:19.929 "state": "completed" 00:23:19.929 }, 00:23:19.929 "cntlid": 71, 00:23:19.929 "listen_address": { 00:23:19.929 "adrfam": "IPv4", 00:23:19.929 "traddr": "10.0.0.2", 00:23:19.929 "trsvcid": "4420", 00:23:19.929 "trtype": "TCP" 00:23:19.929 }, 00:23:19.929 "peer_address": { 00:23:19.929 "adrfam": "IPv4", 00:23:19.929 "traddr": "10.0.0.1", 00:23:19.929 "trsvcid": "44198", 00:23:19.929 "trtype": "TCP" 00:23:19.929 }, 00:23:19.929 "qid": 0, 00:23:19.929 "state": "enabled", 00:23:19.929 "thread": "nvmf_tgt_poll_group_000" 00:23:19.929 } 00:23:19.929 ]' 00:23:20.186 10:04:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:20.186 10:04:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:23:20.187 10:04:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:20.187 10:04:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:23:20.187 10:04:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:20.187 10:04:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:20.187 10:04:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:20.187 10:04:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:20.446 10:04:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec --hostid a2b6b25a-cc90-4aea-9f09-c06f8a634aec --dhchap-secret DHHC-1:03:YzJiYjU3YjhkZjhmNTcwYjY2OGFkYmY4ZDc5MTFjOWZmMDUwZTM5OTA3MmQxZGE5Nzg2M2IzMjAxNDNhYjM4Mgzy6hM=: 00:23:21.015 10:04:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:21.015 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:21.015 10:04:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec 00:23:21.015 10:04:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:21.015 10:04:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:21.015 10:04:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:21.015 10:04:34 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:23:21.015 10:04:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:21.016 10:04:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:23:21.016 10:04:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:23:21.275 10:04:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 0 00:23:21.275 10:04:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:21.275 10:04:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:23:21.275 10:04:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:23:21.275 10:04:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:23:21.275 10:04:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:21.275 10:04:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:21.275 10:04:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:21.275 10:04:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:21.275 10:04:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:21.275 10:04:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:21.275 10:04:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:21.534 00:23:21.534 10:04:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:21.534 10:04:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:21.534 10:04:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:21.792 10:04:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:21.792 10:04:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:21.792 10:04:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:21.792 10:04:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:21.792 10:04:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:21.792 10:04:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:21.792 { 00:23:21.792 "auth": { 00:23:21.792 "dhgroup": "ffdhe4096", 00:23:21.792 "digest": "sha384", 00:23:21.792 "state": "completed" 00:23:21.792 }, 00:23:21.792 "cntlid": 73, 00:23:21.792 
"listen_address": { 00:23:21.792 "adrfam": "IPv4", 00:23:21.792 "traddr": "10.0.0.2", 00:23:21.792 "trsvcid": "4420", 00:23:21.792 "trtype": "TCP" 00:23:21.792 }, 00:23:21.792 "peer_address": { 00:23:21.792 "adrfam": "IPv4", 00:23:21.792 "traddr": "10.0.0.1", 00:23:21.792 "trsvcid": "44216", 00:23:21.792 "trtype": "TCP" 00:23:21.792 }, 00:23:21.792 "qid": 0, 00:23:21.792 "state": "enabled", 00:23:21.792 "thread": "nvmf_tgt_poll_group_000" 00:23:21.792 } 00:23:21.792 ]' 00:23:21.792 10:04:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:21.792 10:04:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:23:21.792 10:04:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:21.792 10:04:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:23:21.792 10:04:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:21.792 10:04:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:21.792 10:04:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:21.792 10:04:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:22.050 10:04:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec --hostid a2b6b25a-cc90-4aea-9f09-c06f8a634aec --dhchap-secret DHHC-1:00:NWQxMzZmNWQwNzExZTJlMmFlZjYyZTRlZjNhY2ZkOGQ1N2U4NDExZWJlYjNhZmFl2ifiLg==: --dhchap-ctrl-secret DHHC-1:03:NTM0YWI3NDk4YjFmMzczMjJkYjJkNjdmZmE1ZDU2OTU0ZjAzNjM0MzUyOGFhMDc4MmQxMjliYjEyYmM0MTFmMrepI4o=: 00:23:22.627 10:04:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:22.627 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:22.627 10:04:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec 00:23:22.627 10:04:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:22.627 10:04:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:22.627 10:04:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:22.627 10:04:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:22.627 10:04:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:23:22.627 10:04:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:23:22.887 10:04:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 1 00:23:22.887 10:04:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:22.887 10:04:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:23:22.887 10:04:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:23:22.887 10:04:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:23:22.887 10:04:36 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:22.887 10:04:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:22.887 10:04:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:22.887 10:04:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:22.887 10:04:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:22.887 10:04:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:22.887 10:04:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:23.147 00:23:23.147 10:04:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:23.147 10:04:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:23.147 10:04:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:23.408 10:04:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:23.408 10:04:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:23.408 10:04:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:23.408 10:04:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:23.668 10:04:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:23.668 10:04:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:23.668 { 00:23:23.668 "auth": { 00:23:23.668 "dhgroup": "ffdhe4096", 00:23:23.668 "digest": "sha384", 00:23:23.668 "state": "completed" 00:23:23.668 }, 00:23:23.668 "cntlid": 75, 00:23:23.668 "listen_address": { 00:23:23.668 "adrfam": "IPv4", 00:23:23.668 "traddr": "10.0.0.2", 00:23:23.668 "trsvcid": "4420", 00:23:23.668 "trtype": "TCP" 00:23:23.668 }, 00:23:23.668 "peer_address": { 00:23:23.668 "adrfam": "IPv4", 00:23:23.668 "traddr": "10.0.0.1", 00:23:23.668 "trsvcid": "44252", 00:23:23.668 "trtype": "TCP" 00:23:23.668 }, 00:23:23.668 "qid": 0, 00:23:23.668 "state": "enabled", 00:23:23.668 "thread": "nvmf_tgt_poll_group_000" 00:23:23.668 } 00:23:23.668 ]' 00:23:23.668 10:04:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:23.668 10:04:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:23:23.668 10:04:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:23.668 10:04:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:23:23.668 10:04:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:23.668 10:04:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed 
== \c\o\m\p\l\e\t\e\d ]] 00:23:23.668 10:04:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:23.668 10:04:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:23.927 10:04:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec --hostid a2b6b25a-cc90-4aea-9f09-c06f8a634aec --dhchap-secret DHHC-1:01:YzNiODQ3NTllNGRkNjkzZWNkZGIyOTVjZTkwMGI1OTNQ924+: --dhchap-ctrl-secret DHHC-1:02:YjczYTQ1MGVmY2EyYWFkNTM0NzE5OGNjOTk2ZDJjZmYyMjYyODliZDNhNzYxZmUxbh2Sew==: 00:23:24.496 10:04:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:24.496 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:24.496 10:04:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec 00:23:24.496 10:04:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:24.496 10:04:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:24.496 10:04:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:24.496 10:04:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:24.496 10:04:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:23:24.496 10:04:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:23:24.756 10:04:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 2 00:23:24.756 10:04:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:24.756 10:04:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:23:24.756 10:04:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:23:24.756 10:04:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:23:24.756 10:04:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:24.756 10:04:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:24.756 10:04:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:24.756 10:04:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:24.756 10:04:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:24.756 10:04:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:24.757 10:04:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 
-a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:25.017 00:23:25.017 10:04:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:25.017 10:04:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:25.017 10:04:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:25.277 10:04:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:25.277 10:04:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:25.277 10:04:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:25.277 10:04:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:25.277 10:04:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:25.277 10:04:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:25.277 { 00:23:25.277 "auth": { 00:23:25.277 "dhgroup": "ffdhe4096", 00:23:25.277 "digest": "sha384", 00:23:25.277 "state": "completed" 00:23:25.277 }, 00:23:25.277 "cntlid": 77, 00:23:25.277 "listen_address": { 00:23:25.277 "adrfam": "IPv4", 00:23:25.277 "traddr": "10.0.0.2", 00:23:25.277 "trsvcid": "4420", 00:23:25.277 "trtype": "TCP" 00:23:25.277 }, 00:23:25.277 "peer_address": { 00:23:25.277 "adrfam": "IPv4", 00:23:25.277 "traddr": "10.0.0.1", 00:23:25.277 "trsvcid": "44292", 00:23:25.277 "trtype": "TCP" 00:23:25.277 }, 00:23:25.277 "qid": 0, 00:23:25.277 "state": "enabled", 00:23:25.277 "thread": "nvmf_tgt_poll_group_000" 00:23:25.277 } 00:23:25.277 ]' 00:23:25.277 10:04:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:25.277 10:04:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:23:25.277 10:04:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:25.277 10:04:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:23:25.277 10:04:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:25.277 10:04:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:25.277 10:04:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:25.277 10:04:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:25.535 10:04:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec --hostid a2b6b25a-cc90-4aea-9f09-c06f8a634aec --dhchap-secret DHHC-1:02:NjU0ZjNjOWFkMGViNDg5OTZjOGQxNGI0NjE3ZDIxZTVkYjZiMWI1ZTA1MDMyMzY4c2X49g==: --dhchap-ctrl-secret DHHC-1:01:OWI2YTRmODIwODQ4MGIxNTQzYjY0YTdkMWJlYzU0NzEP+U4Y: 00:23:26.103 10:04:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:26.103 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:26.103 10:04:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec 00:23:26.103 10:04:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:26.103 10:04:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:26.103 10:04:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:26.103 10:04:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:26.103 10:04:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:23:26.103 10:04:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:23:26.360 10:04:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 3 00:23:26.360 10:04:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:26.360 10:04:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:23:26.360 10:04:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:23:26.360 10:04:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:23:26.360 10:04:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:26.360 10:04:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec --dhchap-key key3 00:23:26.360 10:04:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:26.360 10:04:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:26.360 10:04:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:26.360 10:04:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:23:26.360 10:04:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:23:26.619 00:23:26.619 10:04:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:26.619 10:04:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:26.619 10:04:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:26.877 10:04:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:26.877 10:04:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:26.877 10:04:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:26.877 10:04:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:26.877 10:04:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:26.877 10:04:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 
00:23:26.877 { 00:23:26.877 "auth": { 00:23:26.877 "dhgroup": "ffdhe4096", 00:23:26.877 "digest": "sha384", 00:23:26.877 "state": "completed" 00:23:26.877 }, 00:23:26.877 "cntlid": 79, 00:23:26.877 "listen_address": { 00:23:26.877 "adrfam": "IPv4", 00:23:26.877 "traddr": "10.0.0.2", 00:23:26.877 "trsvcid": "4420", 00:23:26.877 "trtype": "TCP" 00:23:26.877 }, 00:23:26.877 "peer_address": { 00:23:26.877 "adrfam": "IPv4", 00:23:26.877 "traddr": "10.0.0.1", 00:23:26.877 "trsvcid": "44306", 00:23:26.877 "trtype": "TCP" 00:23:26.877 }, 00:23:26.877 "qid": 0, 00:23:26.877 "state": "enabled", 00:23:26.877 "thread": "nvmf_tgt_poll_group_000" 00:23:26.877 } 00:23:26.877 ]' 00:23:26.877 10:04:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:26.877 10:04:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:23:26.877 10:04:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:27.137 10:04:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:23:27.137 10:04:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:27.137 10:04:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:27.137 10:04:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:27.137 10:04:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:27.395 10:04:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec --hostid a2b6b25a-cc90-4aea-9f09-c06f8a634aec --dhchap-secret DHHC-1:03:YzJiYjU3YjhkZjhmNTcwYjY2OGFkYmY4ZDc5MTFjOWZmMDUwZTM5OTA3MmQxZGE5Nzg2M2IzMjAxNDNhYjM4Mgzy6hM=: 00:23:27.964 10:04:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:27.964 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:27.964 10:04:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec 00:23:27.964 10:04:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:27.964 10:04:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:27.964 10:04:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:27.964 10:04:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:23:27.964 10:04:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:27.964 10:04:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:23:27.964 10:04:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:23:27.964 10:04:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 0 00:23:27.964 10:04:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:27.964 10:04:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 
00:23:27.964 10:04:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:23:27.964 10:04:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:23:27.964 10:04:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:27.964 10:04:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:27.964 10:04:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:27.964 10:04:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:27.964 10:04:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:27.964 10:04:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:27.964 10:04:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:28.532 00:23:28.532 10:04:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:28.532 10:04:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:28.532 10:04:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:28.791 10:04:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:28.791 10:04:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:28.791 10:04:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:28.791 10:04:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:28.791 10:04:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:28.791 10:04:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:28.791 { 00:23:28.791 "auth": { 00:23:28.791 "dhgroup": "ffdhe6144", 00:23:28.791 "digest": "sha384", 00:23:28.791 "state": "completed" 00:23:28.791 }, 00:23:28.791 "cntlid": 81, 00:23:28.791 "listen_address": { 00:23:28.791 "adrfam": "IPv4", 00:23:28.791 "traddr": "10.0.0.2", 00:23:28.791 "trsvcid": "4420", 00:23:28.791 "trtype": "TCP" 00:23:28.791 }, 00:23:28.791 "peer_address": { 00:23:28.791 "adrfam": "IPv4", 00:23:28.791 "traddr": "10.0.0.1", 00:23:28.791 "trsvcid": "34088", 00:23:28.791 "trtype": "TCP" 00:23:28.791 }, 00:23:28.791 "qid": 0, 00:23:28.791 "state": "enabled", 00:23:28.791 "thread": "nvmf_tgt_poll_group_000" 00:23:28.791 } 00:23:28.791 ]' 00:23:28.791 10:04:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:28.791 10:04:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:23:28.791 10:04:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:28.791 10:04:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == 
\f\f\d\h\e\6\1\4\4 ]] 00:23:28.791 10:04:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:28.791 10:04:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:28.791 10:04:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:28.791 10:04:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:29.050 10:04:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec --hostid a2b6b25a-cc90-4aea-9f09-c06f8a634aec --dhchap-secret DHHC-1:00:NWQxMzZmNWQwNzExZTJlMmFlZjYyZTRlZjNhY2ZkOGQ1N2U4NDExZWJlYjNhZmFl2ifiLg==: --dhchap-ctrl-secret DHHC-1:03:NTM0YWI3NDk4YjFmMzczMjJkYjJkNjdmZmE1ZDU2OTU0ZjAzNjM0MzUyOGFhMDc4MmQxMjliYjEyYmM0MTFmMrepI4o=: 00:23:29.615 10:04:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:29.615 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:29.615 10:04:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec 00:23:29.615 10:04:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:29.615 10:04:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:29.615 10:04:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:29.615 10:04:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:29.615 10:04:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:23:29.615 10:04:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:23:29.872 10:04:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 1 00:23:29.872 10:04:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:29.872 10:04:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:23:29.872 10:04:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:23:29.872 10:04:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:23:29.872 10:04:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:29.872 10:04:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:29.872 10:04:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:29.872 10:04:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:29.872 10:04:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:29.872 10:04:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:29.872 10:04:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:30.129 00:23:30.129 10:04:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:30.129 10:04:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:30.129 10:04:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:30.387 10:04:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:30.387 10:04:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:30.387 10:04:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:30.387 10:04:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:30.387 10:04:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:30.387 10:04:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:30.387 { 00:23:30.387 "auth": { 00:23:30.387 "dhgroup": "ffdhe6144", 00:23:30.387 "digest": "sha384", 00:23:30.387 "state": "completed" 00:23:30.387 }, 00:23:30.387 "cntlid": 83, 00:23:30.387 "listen_address": { 00:23:30.387 "adrfam": "IPv4", 00:23:30.387 "traddr": "10.0.0.2", 00:23:30.387 "trsvcid": "4420", 00:23:30.387 "trtype": "TCP" 00:23:30.387 }, 00:23:30.387 "peer_address": { 00:23:30.387 "adrfam": "IPv4", 00:23:30.387 "traddr": "10.0.0.1", 00:23:30.387 "trsvcid": "34110", 00:23:30.387 "trtype": "TCP" 00:23:30.387 }, 00:23:30.387 "qid": 0, 00:23:30.387 "state": "enabled", 00:23:30.387 "thread": "nvmf_tgt_poll_group_000" 00:23:30.387 } 00:23:30.387 ]' 00:23:30.387 10:04:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:30.646 10:04:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:23:30.646 10:04:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:30.646 10:04:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:23:30.646 10:04:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:30.646 10:04:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:30.646 10:04:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:30.646 10:04:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:30.904 10:04:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec --hostid a2b6b25a-cc90-4aea-9f09-c06f8a634aec --dhchap-secret DHHC-1:01:YzNiODQ3NTllNGRkNjkzZWNkZGIyOTVjZTkwMGI1OTNQ924+: --dhchap-ctrl-secret DHHC-1:02:YjczYTQ1MGVmY2EyYWFkNTM0NzE5OGNjOTk2ZDJjZmYyMjYyODliZDNhNzYxZmUxbh2Sew==: 00:23:31.471 10:04:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:23:31.471 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:31.471 10:04:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec 00:23:31.471 10:04:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:31.471 10:04:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:31.471 10:04:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:31.471 10:04:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:31.471 10:04:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:23:31.471 10:04:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:23:31.730 10:04:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 2 00:23:31.730 10:04:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:31.730 10:04:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:23:31.730 10:04:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:23:31.730 10:04:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:23:31.730 10:04:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:31.730 10:04:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:31.730 10:04:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:31.730 10:04:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:31.730 10:04:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:31.730 10:04:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:31.730 10:04:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:31.989 00:23:31.989 10:04:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:31.989 10:04:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:31.989 10:04:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:32.249 10:04:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:32.249 10:04:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:32.249 10:04:45 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:23:32.249 10:04:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:32.249 10:04:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:32.249 10:04:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:32.249 { 00:23:32.249 "auth": { 00:23:32.249 "dhgroup": "ffdhe6144", 00:23:32.249 "digest": "sha384", 00:23:32.249 "state": "completed" 00:23:32.249 }, 00:23:32.249 "cntlid": 85, 00:23:32.249 "listen_address": { 00:23:32.249 "adrfam": "IPv4", 00:23:32.249 "traddr": "10.0.0.2", 00:23:32.249 "trsvcid": "4420", 00:23:32.249 "trtype": "TCP" 00:23:32.249 }, 00:23:32.249 "peer_address": { 00:23:32.249 "adrfam": "IPv4", 00:23:32.249 "traddr": "10.0.0.1", 00:23:32.249 "trsvcid": "34148", 00:23:32.249 "trtype": "TCP" 00:23:32.249 }, 00:23:32.249 "qid": 0, 00:23:32.249 "state": "enabled", 00:23:32.249 "thread": "nvmf_tgt_poll_group_000" 00:23:32.249 } 00:23:32.249 ]' 00:23:32.249 10:04:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:32.249 10:04:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:23:32.249 10:04:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:32.509 10:04:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:23:32.509 10:04:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:32.509 10:04:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:32.509 10:04:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:32.509 10:04:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:32.769 10:04:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec --hostid a2b6b25a-cc90-4aea-9f09-c06f8a634aec --dhchap-secret DHHC-1:02:NjU0ZjNjOWFkMGViNDg5OTZjOGQxNGI0NjE3ZDIxZTVkYjZiMWI1ZTA1MDMyMzY4c2X49g==: --dhchap-ctrl-secret DHHC-1:01:OWI2YTRmODIwODQ4MGIxNTQzYjY0YTdkMWJlYzU0NzEP+U4Y: 00:23:33.339 10:04:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:33.339 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:33.339 10:04:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec 00:23:33.339 10:04:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:33.339 10:04:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:33.339 10:04:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:33.339 10:04:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:33.339 10:04:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:23:33.339 10:04:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:23:33.599 10:04:46 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 3 00:23:33.599 10:04:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:33.599 10:04:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:23:33.599 10:04:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:23:33.599 10:04:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:23:33.599 10:04:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:33.599 10:04:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec --dhchap-key key3 00:23:33.599 10:04:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:33.599 10:04:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:33.599 10:04:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:33.599 10:04:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:23:33.599 10:04:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:23:33.858 00:23:33.858 10:04:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:33.858 10:04:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:33.858 10:04:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:34.117 10:04:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:34.117 10:04:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:34.117 10:04:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:34.117 10:04:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:34.117 10:04:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:34.117 10:04:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:34.117 { 00:23:34.117 "auth": { 00:23:34.117 "dhgroup": "ffdhe6144", 00:23:34.117 "digest": "sha384", 00:23:34.117 "state": "completed" 00:23:34.117 }, 00:23:34.117 "cntlid": 87, 00:23:34.117 "listen_address": { 00:23:34.117 "adrfam": "IPv4", 00:23:34.117 "traddr": "10.0.0.2", 00:23:34.117 "trsvcid": "4420", 00:23:34.117 "trtype": "TCP" 00:23:34.117 }, 00:23:34.117 "peer_address": { 00:23:34.117 "adrfam": "IPv4", 00:23:34.117 "traddr": "10.0.0.1", 00:23:34.117 "trsvcid": "34176", 00:23:34.117 "trtype": "TCP" 00:23:34.117 }, 00:23:34.117 "qid": 0, 00:23:34.117 "state": "enabled", 00:23:34.117 "thread": "nvmf_tgt_poll_group_000" 00:23:34.117 } 00:23:34.117 ]' 00:23:34.118 10:04:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:34.118 10:04:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == 
\s\h\a\3\8\4 ]] 00:23:34.118 10:04:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:34.118 10:04:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:23:34.118 10:04:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:34.377 10:04:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:34.377 10:04:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:34.377 10:04:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:34.377 10:04:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec --hostid a2b6b25a-cc90-4aea-9f09-c06f8a634aec --dhchap-secret DHHC-1:03:YzJiYjU3YjhkZjhmNTcwYjY2OGFkYmY4ZDc5MTFjOWZmMDUwZTM5OTA3MmQxZGE5Nzg2M2IzMjAxNDNhYjM4Mgzy6hM=: 00:23:34.946 10:04:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:34.946 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:34.946 10:04:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec 00:23:34.946 10:04:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:34.946 10:04:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:34.946 10:04:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:34.946 10:04:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:23:34.946 10:04:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:34.946 10:04:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:23:34.946 10:04:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:23:35.205 10:04:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 0 00:23:35.205 10:04:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:35.205 10:04:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:23:35.205 10:04:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:23:35.205 10:04:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:23:35.205 10:04:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:35.205 10:04:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:35.205 10:04:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:35.205 10:04:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:35.205 10:04:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:35.205 10:04:48 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:35.205 10:04:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:35.773 00:23:35.773 10:04:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:35.773 10:04:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:35.773 10:04:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:36.032 10:04:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:36.032 10:04:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:36.032 10:04:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:36.032 10:04:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:36.032 10:04:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:36.032 10:04:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:36.032 { 00:23:36.032 "auth": { 00:23:36.032 "dhgroup": "ffdhe8192", 00:23:36.032 "digest": "sha384", 00:23:36.032 "state": "completed" 00:23:36.032 }, 00:23:36.032 "cntlid": 89, 00:23:36.032 "listen_address": { 00:23:36.032 "adrfam": "IPv4", 00:23:36.032 "traddr": "10.0.0.2", 00:23:36.032 "trsvcid": "4420", 00:23:36.032 "trtype": "TCP" 00:23:36.032 }, 00:23:36.032 "peer_address": { 00:23:36.032 "adrfam": "IPv4", 00:23:36.032 "traddr": "10.0.0.1", 00:23:36.032 "trsvcid": "34198", 00:23:36.032 "trtype": "TCP" 00:23:36.032 }, 00:23:36.032 "qid": 0, 00:23:36.032 "state": "enabled", 00:23:36.032 "thread": "nvmf_tgt_poll_group_000" 00:23:36.032 } 00:23:36.032 ]' 00:23:36.032 10:04:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:36.032 10:04:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:23:36.032 10:04:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:36.032 10:04:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:23:36.032 10:04:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:36.292 10:04:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:36.292 10:04:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:36.292 10:04:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:36.292 10:04:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec --hostid a2b6b25a-cc90-4aea-9f09-c06f8a634aec --dhchap-secret 
DHHC-1:00:NWQxMzZmNWQwNzExZTJlMmFlZjYyZTRlZjNhY2ZkOGQ1N2U4NDExZWJlYjNhZmFl2ifiLg==: --dhchap-ctrl-secret DHHC-1:03:NTM0YWI3NDk4YjFmMzczMjJkYjJkNjdmZmE1ZDU2OTU0ZjAzNjM0MzUyOGFhMDc4MmQxMjliYjEyYmM0MTFmMrepI4o=: 00:23:36.861 10:04:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:36.861 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:36.861 10:04:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec 00:23:36.861 10:04:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:36.861 10:04:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:36.861 10:04:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:36.861 10:04:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:36.862 10:04:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:23:36.862 10:04:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:23:37.121 10:04:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 1 00:23:37.121 10:04:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:37.121 10:04:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:23:37.121 10:04:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:23:37.121 10:04:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:23:37.121 10:04:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:37.121 10:04:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:37.121 10:04:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:37.121 10:04:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:37.121 10:04:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:37.121 10:04:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:37.121 10:04:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:37.689 00:23:37.689 10:04:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:37.689 10:04:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:37.689 10:04:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 
00:23:37.949 10:04:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:37.949 10:04:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:37.949 10:04:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:37.949 10:04:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:37.949 10:04:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:37.949 10:04:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:37.949 { 00:23:37.949 "auth": { 00:23:37.949 "dhgroup": "ffdhe8192", 00:23:37.949 "digest": "sha384", 00:23:37.949 "state": "completed" 00:23:37.949 }, 00:23:37.949 "cntlid": 91, 00:23:37.949 "listen_address": { 00:23:37.949 "adrfam": "IPv4", 00:23:37.949 "traddr": "10.0.0.2", 00:23:37.949 "trsvcid": "4420", 00:23:37.949 "trtype": "TCP" 00:23:37.949 }, 00:23:37.949 "peer_address": { 00:23:37.949 "adrfam": "IPv4", 00:23:37.949 "traddr": "10.0.0.1", 00:23:37.949 "trsvcid": "34214", 00:23:37.949 "trtype": "TCP" 00:23:37.949 }, 00:23:37.949 "qid": 0, 00:23:37.949 "state": "enabled", 00:23:37.949 "thread": "nvmf_tgt_poll_group_000" 00:23:37.949 } 00:23:37.949 ]' 00:23:37.949 10:04:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:37.949 10:04:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:23:37.949 10:04:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:37.949 10:04:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:23:37.949 10:04:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:38.209 10:04:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:38.209 10:04:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:38.209 10:04:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:38.468 10:04:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec --hostid a2b6b25a-cc90-4aea-9f09-c06f8a634aec --dhchap-secret DHHC-1:01:YzNiODQ3NTllNGRkNjkzZWNkZGIyOTVjZTkwMGI1OTNQ924+: --dhchap-ctrl-secret DHHC-1:02:YjczYTQ1MGVmY2EyYWFkNTM0NzE5OGNjOTk2ZDJjZmYyMjYyODliZDNhNzYxZmUxbh2Sew==: 00:23:39.036 10:04:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:39.036 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:39.036 10:04:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec 00:23:39.036 10:04:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:39.036 10:04:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:39.036 10:04:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:39.036 10:04:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:39.036 10:04:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe8192 00:23:39.036 10:04:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:23:39.294 10:04:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 2 00:23:39.294 10:04:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:39.294 10:04:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:23:39.294 10:04:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:23:39.294 10:04:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:23:39.295 10:04:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:39.295 10:04:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:39.295 10:04:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:39.295 10:04:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:39.295 10:04:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:39.295 10:04:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:39.295 10:04:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:39.862 00:23:39.862 10:04:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:39.862 10:04:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:39.862 10:04:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:39.862 10:04:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:39.862 10:04:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:39.862 10:04:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:39.862 10:04:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:39.862 10:04:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:39.862 10:04:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:39.862 { 00:23:39.862 "auth": { 00:23:39.862 "dhgroup": "ffdhe8192", 00:23:39.862 "digest": "sha384", 00:23:39.862 "state": "completed" 00:23:39.862 }, 00:23:39.862 "cntlid": 93, 00:23:39.862 "listen_address": { 00:23:39.862 "adrfam": "IPv4", 00:23:39.862 "traddr": "10.0.0.2", 00:23:39.862 "trsvcid": "4420", 00:23:39.862 "trtype": "TCP" 00:23:39.862 }, 00:23:39.862 "peer_address": { 00:23:39.862 "adrfam": "IPv4", 00:23:39.862 "traddr": "10.0.0.1", 00:23:39.862 "trsvcid": "43568", 00:23:39.862 
"trtype": "TCP" 00:23:39.862 }, 00:23:39.862 "qid": 0, 00:23:39.862 "state": "enabled", 00:23:39.862 "thread": "nvmf_tgt_poll_group_000" 00:23:39.862 } 00:23:39.862 ]' 00:23:39.862 10:04:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:40.119 10:04:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:23:40.119 10:04:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:40.119 10:04:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:23:40.119 10:04:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:40.119 10:04:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:40.119 10:04:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:40.119 10:04:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:40.377 10:04:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec --hostid a2b6b25a-cc90-4aea-9f09-c06f8a634aec --dhchap-secret DHHC-1:02:NjU0ZjNjOWFkMGViNDg5OTZjOGQxNGI0NjE3ZDIxZTVkYjZiMWI1ZTA1MDMyMzY4c2X49g==: --dhchap-ctrl-secret DHHC-1:01:OWI2YTRmODIwODQ4MGIxNTQzYjY0YTdkMWJlYzU0NzEP+U4Y: 00:23:40.944 10:04:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:40.944 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:40.944 10:04:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec 00:23:40.944 10:04:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:40.944 10:04:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:40.944 10:04:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:40.944 10:04:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:40.944 10:04:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:23:40.944 10:04:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:23:41.202 10:04:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 3 00:23:41.202 10:04:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:41.202 10:04:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:23:41.202 10:04:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:23:41.202 10:04:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:23:41.202 10:04:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:41.202 10:04:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec --dhchap-key key3 00:23:41.202 10:04:54 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:41.202 10:04:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:41.202 10:04:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:41.202 10:04:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:23:41.202 10:04:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:23:41.769 00:23:41.769 10:04:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:41.769 10:04:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:41.769 10:04:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:42.066 10:04:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:42.066 10:04:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:42.066 10:04:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:42.066 10:04:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:42.066 10:04:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:42.066 10:04:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:42.066 { 00:23:42.066 "auth": { 00:23:42.066 "dhgroup": "ffdhe8192", 00:23:42.066 "digest": "sha384", 00:23:42.066 "state": "completed" 00:23:42.066 }, 00:23:42.066 "cntlid": 95, 00:23:42.066 "listen_address": { 00:23:42.066 "adrfam": "IPv4", 00:23:42.066 "traddr": "10.0.0.2", 00:23:42.066 "trsvcid": "4420", 00:23:42.066 "trtype": "TCP" 00:23:42.066 }, 00:23:42.066 "peer_address": { 00:23:42.066 "adrfam": "IPv4", 00:23:42.066 "traddr": "10.0.0.1", 00:23:42.066 "trsvcid": "43596", 00:23:42.066 "trtype": "TCP" 00:23:42.066 }, 00:23:42.066 "qid": 0, 00:23:42.066 "state": "enabled", 00:23:42.066 "thread": "nvmf_tgt_poll_group_000" 00:23:42.066 } 00:23:42.066 ]' 00:23:42.066 10:04:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:42.066 10:04:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:23:42.066 10:04:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:42.066 10:04:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:23:42.066 10:04:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:42.066 10:04:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:42.066 10:04:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:42.066 10:04:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:42.326 10:04:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 
10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec --hostid a2b6b25a-cc90-4aea-9f09-c06f8a634aec --dhchap-secret DHHC-1:03:YzJiYjU3YjhkZjhmNTcwYjY2OGFkYmY4ZDc5MTFjOWZmMDUwZTM5OTA3MmQxZGE5Nzg2M2IzMjAxNDNhYjM4Mgzy6hM=: 00:23:42.895 10:04:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:42.895 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:42.895 10:04:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec 00:23:42.895 10:04:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:42.895 10:04:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:42.895 10:04:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:42.895 10:04:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:23:42.895 10:04:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:23:42.895 10:04:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:42.895 10:04:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:23:42.895 10:04:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:23:43.155 10:04:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 0 00:23:43.155 10:04:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:43.155 10:04:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:23:43.155 10:04:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:23:43.155 10:04:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:23:43.155 10:04:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:43.155 10:04:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:43.155 10:04:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:43.155 10:04:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:43.155 10:04:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:43.155 10:04:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:43.155 10:04:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:43.415 00:23:43.415 10:04:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:43.415 
10:04:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:43.415 10:04:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:43.675 10:04:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:43.675 10:04:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:43.675 10:04:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:43.675 10:04:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:43.675 10:04:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:43.675 10:04:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:43.675 { 00:23:43.675 "auth": { 00:23:43.675 "dhgroup": "null", 00:23:43.675 "digest": "sha512", 00:23:43.675 "state": "completed" 00:23:43.675 }, 00:23:43.675 "cntlid": 97, 00:23:43.675 "listen_address": { 00:23:43.675 "adrfam": "IPv4", 00:23:43.675 "traddr": "10.0.0.2", 00:23:43.675 "trsvcid": "4420", 00:23:43.675 "trtype": "TCP" 00:23:43.675 }, 00:23:43.675 "peer_address": { 00:23:43.675 "adrfam": "IPv4", 00:23:43.675 "traddr": "10.0.0.1", 00:23:43.675 "trsvcid": "43614", 00:23:43.675 "trtype": "TCP" 00:23:43.675 }, 00:23:43.675 "qid": 0, 00:23:43.675 "state": "enabled", 00:23:43.675 "thread": "nvmf_tgt_poll_group_000" 00:23:43.675 } 00:23:43.675 ]' 00:23:43.675 10:04:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:43.675 10:04:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:43.676 10:04:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:43.676 10:04:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:23:43.676 10:04:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:43.676 10:04:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:43.676 10:04:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:43.676 10:04:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:43.935 10:04:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec --hostid a2b6b25a-cc90-4aea-9f09-c06f8a634aec --dhchap-secret DHHC-1:00:NWQxMzZmNWQwNzExZTJlMmFlZjYyZTRlZjNhY2ZkOGQ1N2U4NDExZWJlYjNhZmFl2ifiLg==: --dhchap-ctrl-secret DHHC-1:03:NTM0YWI3NDk4YjFmMzczMjJkYjJkNjdmZmE1ZDU2OTU0ZjAzNjM0MzUyOGFhMDc4MmQxMjliYjEyYmM0MTFmMrepI4o=: 00:23:44.502 10:04:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:44.502 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:44.503 10:04:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec 00:23:44.503 10:04:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:44.503 10:04:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:44.503 10:04:58 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:44.503 10:04:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:44.503 10:04:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:23:44.503 10:04:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:23:44.762 10:04:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 1 00:23:44.762 10:04:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:44.762 10:04:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:23:44.762 10:04:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:23:44.762 10:04:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:23:44.762 10:04:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:44.762 10:04:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:44.762 10:04:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:44.762 10:04:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:44.762 10:04:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:44.762 10:04:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:44.762 10:04:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:45.021 00:23:45.021 10:04:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:45.021 10:04:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:45.021 10:04:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:45.280 10:04:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:45.280 10:04:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:45.280 10:04:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:45.280 10:04:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:45.280 10:04:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:45.280 10:04:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:45.280 { 00:23:45.280 "auth": { 00:23:45.280 "dhgroup": "null", 00:23:45.280 "digest": "sha512", 00:23:45.280 "state": "completed" 00:23:45.280 }, 00:23:45.280 "cntlid": 99, 00:23:45.280 "listen_address": { 00:23:45.280 
"adrfam": "IPv4", 00:23:45.280 "traddr": "10.0.0.2", 00:23:45.280 "trsvcid": "4420", 00:23:45.280 "trtype": "TCP" 00:23:45.280 }, 00:23:45.280 "peer_address": { 00:23:45.280 "adrfam": "IPv4", 00:23:45.280 "traddr": "10.0.0.1", 00:23:45.280 "trsvcid": "43656", 00:23:45.280 "trtype": "TCP" 00:23:45.280 }, 00:23:45.280 "qid": 0, 00:23:45.280 "state": "enabled", 00:23:45.280 "thread": "nvmf_tgt_poll_group_000" 00:23:45.280 } 00:23:45.280 ]' 00:23:45.280 10:04:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:45.280 10:04:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:45.280 10:04:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:45.539 10:04:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:23:45.539 10:04:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:45.539 10:04:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:45.539 10:04:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:45.539 10:04:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:45.813 10:04:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec --hostid a2b6b25a-cc90-4aea-9f09-c06f8a634aec --dhchap-secret DHHC-1:01:YzNiODQ3NTllNGRkNjkzZWNkZGIyOTVjZTkwMGI1OTNQ924+: --dhchap-ctrl-secret DHHC-1:02:YjczYTQ1MGVmY2EyYWFkNTM0NzE5OGNjOTk2ZDJjZmYyMjYyODliZDNhNzYxZmUxbh2Sew==: 00:23:46.379 10:04:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:46.379 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:46.379 10:04:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec 00:23:46.379 10:04:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:46.379 10:04:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:46.379 10:04:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:46.379 10:04:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:46.379 10:04:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:23:46.379 10:04:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:23:46.379 10:04:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 2 00:23:46.379 10:04:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:46.379 10:04:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:23:46.379 10:04:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:23:46.379 10:04:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:23:46.379 10:04:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:46.379 10:04:59 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:46.379 10:04:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:46.379 10:04:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:46.379 10:04:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:46.379 10:04:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:46.379 10:04:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:46.638 00:23:46.638 10:05:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:46.638 10:05:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:46.638 10:05:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:46.897 10:05:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:46.897 10:05:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:46.897 10:05:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:46.897 10:05:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:46.897 10:05:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:46.897 10:05:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:46.897 { 00:23:46.897 "auth": { 00:23:46.897 "dhgroup": "null", 00:23:46.897 "digest": "sha512", 00:23:46.897 "state": "completed" 00:23:46.897 }, 00:23:46.897 "cntlid": 101, 00:23:46.897 "listen_address": { 00:23:46.897 "adrfam": "IPv4", 00:23:46.897 "traddr": "10.0.0.2", 00:23:46.897 "trsvcid": "4420", 00:23:46.897 "trtype": "TCP" 00:23:46.897 }, 00:23:46.897 "peer_address": { 00:23:46.897 "adrfam": "IPv4", 00:23:46.897 "traddr": "10.0.0.1", 00:23:46.897 "trsvcid": "43696", 00:23:46.897 "trtype": "TCP" 00:23:46.897 }, 00:23:46.897 "qid": 0, 00:23:46.897 "state": "enabled", 00:23:46.897 "thread": "nvmf_tgt_poll_group_000" 00:23:46.897 } 00:23:46.897 ]' 00:23:46.897 10:05:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:47.155 10:05:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:47.155 10:05:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:47.155 10:05:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:23:47.155 10:05:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:47.155 10:05:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:47.155 10:05:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 
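
Each pass of nvmf_auth_target above covers one (digest, dhgroup, key) combination with the same sequence of RPC calls: the host-side SPDK app is restricted to the digest and DH group under test, the target allows the host NQN on cnode0 with a DH-HMAC-CHAP key pair, a controller is attached with the matching keys, and the negotiated auth parameters on the resulting qpair are checked before tearing down. The block below is only a condensed sketch reconstructed from the commands visible in this trace: rpc_cmd and hostrpc are helpers from the test scripts (the latter demonstrably drives a second SPDK app on /var/tmp/host.sock), the rpc.py path is shortened, and $HOSTNQN stands in for the nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-... host NQN seen throughout the log.

  # Host side: limit the initiator to the digest/dhgroup being exercised
  scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
      --dhchap-digests sha512 --dhchap-dhgroups null

  # Target side: allow the host on the subsystem with key + controller key
  rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$HOSTNQN" \
      --dhchap-key key0 --dhchap-ctrlr-key ckey0

  # Host side: attach a controller using the same key pair
  scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 \
      -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q "$HOSTNQN" \
      -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0

  # Check the attach and the negotiated auth parameters, then detach
  scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'   # expect nvme0
  rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 \
      | jq -r '.[0].auth | .digest, .dhgroup, .state'                                 # sha512 / null / completed
  scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
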
00:23:47.155 10:05:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:47.413 10:05:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec --hostid a2b6b25a-cc90-4aea-9f09-c06f8a634aec --dhchap-secret DHHC-1:02:NjU0ZjNjOWFkMGViNDg5OTZjOGQxNGI0NjE3ZDIxZTVkYjZiMWI1ZTA1MDMyMzY4c2X49g==: --dhchap-ctrl-secret DHHC-1:01:OWI2YTRmODIwODQ4MGIxNTQzYjY0YTdkMWJlYzU0NzEP+U4Y: 00:23:47.980 10:05:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:47.980 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:47.980 10:05:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec 00:23:47.980 10:05:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:47.980 10:05:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:47.980 10:05:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:47.980 10:05:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:47.980 10:05:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:23:47.980 10:05:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:23:48.240 10:05:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 3 00:23:48.240 10:05:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:48.240 10:05:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:23:48.240 10:05:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:23:48.240 10:05:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:23:48.240 10:05:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:48.240 10:05:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec --dhchap-key key3 00:23:48.240 10:05:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:48.240 10:05:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:48.240 10:05:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:48.240 10:05:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:23:48.240 10:05:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:23:48.499 00:23:48.499 10:05:01 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:48.499 10:05:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:48.499 10:05:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:48.758 10:05:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:48.758 10:05:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:48.758 10:05:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:48.758 10:05:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:48.758 10:05:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:48.758 10:05:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:48.758 { 00:23:48.758 "auth": { 00:23:48.758 "dhgroup": "null", 00:23:48.758 "digest": "sha512", 00:23:48.758 "state": "completed" 00:23:48.758 }, 00:23:48.758 "cntlid": 103, 00:23:48.758 "listen_address": { 00:23:48.758 "adrfam": "IPv4", 00:23:48.758 "traddr": "10.0.0.2", 00:23:48.758 "trsvcid": "4420", 00:23:48.758 "trtype": "TCP" 00:23:48.758 }, 00:23:48.758 "peer_address": { 00:23:48.758 "adrfam": "IPv4", 00:23:48.758 "traddr": "10.0.0.1", 00:23:48.758 "trsvcid": "36880", 00:23:48.758 "trtype": "TCP" 00:23:48.758 }, 00:23:48.758 "qid": 0, 00:23:48.758 "state": "enabled", 00:23:48.758 "thread": "nvmf_tgt_poll_group_000" 00:23:48.758 } 00:23:48.758 ]' 00:23:48.758 10:05:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:48.758 10:05:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:48.758 10:05:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:48.758 10:05:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:23:48.758 10:05:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:48.758 10:05:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:48.758 10:05:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:48.758 10:05:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:49.017 10:05:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec --hostid a2b6b25a-cc90-4aea-9f09-c06f8a634aec --dhchap-secret DHHC-1:03:YzJiYjU3YjhkZjhmNTcwYjY2OGFkYmY4ZDc5MTFjOWZmMDUwZTM5OTA3MmQxZGE5Nzg2M2IzMjAxNDNhYjM4Mgzy6hM=: 00:23:49.586 10:05:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:49.586 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:49.586 10:05:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec 00:23:49.586 10:05:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:49.586 10:05:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:49.586 10:05:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 
== 0 ]] 00:23:49.586 10:05:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:23:49.586 10:05:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:49.586 10:05:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:49.586 10:05:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:49.845 10:05:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 0 00:23:49.845 10:05:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:49.845 10:05:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:23:49.845 10:05:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:23:49.845 10:05:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:23:49.845 10:05:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:49.845 10:05:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:49.845 10:05:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:49.845 10:05:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:49.845 10:05:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:49.845 10:05:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:49.845 10:05:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:50.105 00:23:50.105 10:05:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:50.105 10:05:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:50.105 10:05:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:50.365 10:05:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:50.365 10:05:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:50.365 10:05:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:50.365 10:05:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:50.365 10:05:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:50.365 10:05:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:50.365 { 00:23:50.365 "auth": { 00:23:50.365 "dhgroup": "ffdhe2048", 00:23:50.365 "digest": "sha512", 00:23:50.365 "state": "completed" 00:23:50.365 }, 00:23:50.365 
"cntlid": 105, 00:23:50.365 "listen_address": { 00:23:50.365 "adrfam": "IPv4", 00:23:50.365 "traddr": "10.0.0.2", 00:23:50.365 "trsvcid": "4420", 00:23:50.365 "trtype": "TCP" 00:23:50.365 }, 00:23:50.365 "peer_address": { 00:23:50.365 "adrfam": "IPv4", 00:23:50.365 "traddr": "10.0.0.1", 00:23:50.365 "trsvcid": "36900", 00:23:50.365 "trtype": "TCP" 00:23:50.365 }, 00:23:50.365 "qid": 0, 00:23:50.365 "state": "enabled", 00:23:50.365 "thread": "nvmf_tgt_poll_group_000" 00:23:50.365 } 00:23:50.365 ]' 00:23:50.365 10:05:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:50.365 10:05:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:50.624 10:05:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:50.624 10:05:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:23:50.624 10:05:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:50.624 10:05:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:50.624 10:05:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:50.624 10:05:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:50.884 10:05:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec --hostid a2b6b25a-cc90-4aea-9f09-c06f8a634aec --dhchap-secret DHHC-1:00:NWQxMzZmNWQwNzExZTJlMmFlZjYyZTRlZjNhY2ZkOGQ1N2U4NDExZWJlYjNhZmFl2ifiLg==: --dhchap-ctrl-secret DHHC-1:03:NTM0YWI3NDk4YjFmMzczMjJkYjJkNjdmZmE1ZDU2OTU0ZjAzNjM0MzUyOGFhMDc4MmQxMjliYjEyYmM0MTFmMrepI4o=: 00:23:51.452 10:05:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:51.452 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:51.452 10:05:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec 00:23:51.452 10:05:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:51.452 10:05:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:51.452 10:05:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:51.452 10:05:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:51.452 10:05:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:51.452 10:05:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:51.712 10:05:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 1 00:23:51.712 10:05:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:51.712 10:05:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:23:51.712 10:05:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:23:51.712 10:05:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 
00:23:51.712 10:05:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:51.712 10:05:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:51.712 10:05:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:51.712 10:05:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:51.712 10:05:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:51.712 10:05:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:51.712 10:05:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:51.970 00:23:51.970 10:05:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:51.971 10:05:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:51.971 10:05:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:52.230 10:05:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:52.230 10:05:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:52.230 10:05:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:52.230 10:05:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:52.230 10:05:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:52.230 10:05:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:52.230 { 00:23:52.230 "auth": { 00:23:52.230 "dhgroup": "ffdhe2048", 00:23:52.230 "digest": "sha512", 00:23:52.230 "state": "completed" 00:23:52.230 }, 00:23:52.230 "cntlid": 107, 00:23:52.230 "listen_address": { 00:23:52.230 "adrfam": "IPv4", 00:23:52.230 "traddr": "10.0.0.2", 00:23:52.230 "trsvcid": "4420", 00:23:52.230 "trtype": "TCP" 00:23:52.230 }, 00:23:52.230 "peer_address": { 00:23:52.230 "adrfam": "IPv4", 00:23:52.230 "traddr": "10.0.0.1", 00:23:52.230 "trsvcid": "36930", 00:23:52.230 "trtype": "TCP" 00:23:52.230 }, 00:23:52.230 "qid": 0, 00:23:52.230 "state": "enabled", 00:23:52.230 "thread": "nvmf_tgt_poll_group_000" 00:23:52.230 } 00:23:52.230 ]' 00:23:52.230 10:05:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:52.230 10:05:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:52.230 10:05:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:52.230 10:05:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:23:52.230 10:05:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:52.230 10:05:05 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:52.230 10:05:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:52.230 10:05:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:52.496 10:05:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec --hostid a2b6b25a-cc90-4aea-9f09-c06f8a634aec --dhchap-secret DHHC-1:01:YzNiODQ3NTllNGRkNjkzZWNkZGIyOTVjZTkwMGI1OTNQ924+: --dhchap-ctrl-secret DHHC-1:02:YjczYTQ1MGVmY2EyYWFkNTM0NzE5OGNjOTk2ZDJjZmYyMjYyODliZDNhNzYxZmUxbh2Sew==: 00:23:53.066 10:05:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:53.066 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:53.066 10:05:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec 00:23:53.066 10:05:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:53.066 10:05:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:53.066 10:05:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:53.066 10:05:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:53.066 10:05:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:53.066 10:05:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:53.326 10:05:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 2 00:23:53.326 10:05:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:53.326 10:05:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:23:53.326 10:05:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:23:53.326 10:05:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:23:53.326 10:05:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:53.326 10:05:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:53.326 10:05:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:53.326 10:05:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:53.326 10:05:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:53.326 10:05:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:53.326 10:05:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:53.586 00:23:53.586 10:05:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:53.586 10:05:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:53.586 10:05:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:53.845 10:05:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:53.845 10:05:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:53.845 10:05:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:53.845 10:05:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:53.845 10:05:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:53.845 10:05:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:53.845 { 00:23:53.845 "auth": { 00:23:53.845 "dhgroup": "ffdhe2048", 00:23:53.845 "digest": "sha512", 00:23:53.845 "state": "completed" 00:23:53.845 }, 00:23:53.845 "cntlid": 109, 00:23:53.845 "listen_address": { 00:23:53.845 "adrfam": "IPv4", 00:23:53.845 "traddr": "10.0.0.2", 00:23:53.845 "trsvcid": "4420", 00:23:53.845 "trtype": "TCP" 00:23:53.845 }, 00:23:53.845 "peer_address": { 00:23:53.845 "adrfam": "IPv4", 00:23:53.845 "traddr": "10.0.0.1", 00:23:53.845 "trsvcid": "36964", 00:23:53.845 "trtype": "TCP" 00:23:53.845 }, 00:23:53.845 "qid": 0, 00:23:53.845 "state": "enabled", 00:23:53.845 "thread": "nvmf_tgt_poll_group_000" 00:23:53.845 } 00:23:53.845 ]' 00:23:53.845 10:05:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:53.845 10:05:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:53.845 10:05:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:54.105 10:05:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:23:54.105 10:05:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:54.105 10:05:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:54.105 10:05:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:54.105 10:05:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:54.364 10:05:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec --hostid a2b6b25a-cc90-4aea-9f09-c06f8a634aec --dhchap-secret DHHC-1:02:NjU0ZjNjOWFkMGViNDg5OTZjOGQxNGI0NjE3ZDIxZTVkYjZiMWI1ZTA1MDMyMzY4c2X49g==: --dhchap-ctrl-secret DHHC-1:01:OWI2YTRmODIwODQ4MGIxNTQzYjY0YTdkMWJlYzU0NzEP+U4Y: 00:23:54.951 10:05:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:54.951 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:54.951 10:05:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec 00:23:54.951 10:05:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:54.951 10:05:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:54.951 10:05:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:54.951 10:05:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:54.951 10:05:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:54.951 10:05:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:54.951 10:05:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 3 00:23:54.951 10:05:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:54.951 10:05:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:23:54.951 10:05:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:23:54.951 10:05:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:23:54.951 10:05:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:54.951 10:05:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec --dhchap-key key3 00:23:54.951 10:05:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:54.951 10:05:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:54.951 10:05:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:54.951 10:05:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:23:54.951 10:05:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:23:55.209 00:23:55.469 10:05:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:55.469 10:05:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:55.469 10:05:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:55.469 10:05:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:55.469 10:05:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:55.469 10:05:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:55.469 10:05:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:55.469 10:05:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:55.469 10:05:09 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@45 -- # qpairs='[ 00:23:55.469 { 00:23:55.469 "auth": { 00:23:55.469 "dhgroup": "ffdhe2048", 00:23:55.469 "digest": "sha512", 00:23:55.469 "state": "completed" 00:23:55.469 }, 00:23:55.469 "cntlid": 111, 00:23:55.469 "listen_address": { 00:23:55.469 "adrfam": "IPv4", 00:23:55.469 "traddr": "10.0.0.2", 00:23:55.469 "trsvcid": "4420", 00:23:55.469 "trtype": "TCP" 00:23:55.469 }, 00:23:55.469 "peer_address": { 00:23:55.469 "adrfam": "IPv4", 00:23:55.469 "traddr": "10.0.0.1", 00:23:55.469 "trsvcid": "36996", 00:23:55.469 "trtype": "TCP" 00:23:55.469 }, 00:23:55.469 "qid": 0, 00:23:55.469 "state": "enabled", 00:23:55.469 "thread": "nvmf_tgt_poll_group_000" 00:23:55.469 } 00:23:55.469 ]' 00:23:55.469 10:05:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:55.729 10:05:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:55.729 10:05:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:55.729 10:05:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:23:55.729 10:05:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:55.729 10:05:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:55.729 10:05:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:55.729 10:05:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:55.989 10:05:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec --hostid a2b6b25a-cc90-4aea-9f09-c06f8a634aec --dhchap-secret DHHC-1:03:YzJiYjU3YjhkZjhmNTcwYjY2OGFkYmY4ZDc5MTFjOWZmMDUwZTM5OTA3MmQxZGE5Nzg2M2IzMjAxNDNhYjM4Mgzy6hM=: 00:23:56.557 10:05:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:56.557 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:56.557 10:05:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec 00:23:56.557 10:05:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:56.557 10:05:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:56.557 10:05:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:56.557 10:05:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:23:56.557 10:05:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:56.557 10:05:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:56.557 10:05:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:56.816 10:05:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 0 00:23:56.816 10:05:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:56.816 10:05:10 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@36 -- # digest=sha512 00:23:56.816 10:05:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:23:56.816 10:05:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:23:56.816 10:05:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:56.816 10:05:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:56.816 10:05:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:56.816 10:05:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:56.816 10:05:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:56.816 10:05:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:56.816 10:05:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:57.077 00:23:57.077 10:05:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:57.077 10:05:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:57.077 10:05:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:57.337 10:05:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:57.337 10:05:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:57.337 10:05:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:57.337 10:05:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:57.337 10:05:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:57.337 10:05:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:57.337 { 00:23:57.337 "auth": { 00:23:57.337 "dhgroup": "ffdhe3072", 00:23:57.337 "digest": "sha512", 00:23:57.337 "state": "completed" 00:23:57.337 }, 00:23:57.337 "cntlid": 113, 00:23:57.337 "listen_address": { 00:23:57.337 "adrfam": "IPv4", 00:23:57.337 "traddr": "10.0.0.2", 00:23:57.337 "trsvcid": "4420", 00:23:57.337 "trtype": "TCP" 00:23:57.337 }, 00:23:57.337 "peer_address": { 00:23:57.337 "adrfam": "IPv4", 00:23:57.337 "traddr": "10.0.0.1", 00:23:57.337 "trsvcid": "37024", 00:23:57.337 "trtype": "TCP" 00:23:57.337 }, 00:23:57.337 "qid": 0, 00:23:57.337 "state": "enabled", 00:23:57.337 "thread": "nvmf_tgt_poll_group_000" 00:23:57.337 } 00:23:57.337 ]' 00:23:57.337 10:05:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:57.337 10:05:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:57.337 10:05:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:57.337 10:05:10 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:23:57.337 10:05:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:57.337 10:05:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:57.337 10:05:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:57.337 10:05:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:57.597 10:05:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec --hostid a2b6b25a-cc90-4aea-9f09-c06f8a634aec --dhchap-secret DHHC-1:00:NWQxMzZmNWQwNzExZTJlMmFlZjYyZTRlZjNhY2ZkOGQ1N2U4NDExZWJlYjNhZmFl2ifiLg==: --dhchap-ctrl-secret DHHC-1:03:NTM0YWI3NDk4YjFmMzczMjJkYjJkNjdmZmE1ZDU2OTU0ZjAzNjM0MzUyOGFhMDc4MmQxMjliYjEyYmM0MTFmMrepI4o=: 00:23:58.165 10:05:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:58.165 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:58.165 10:05:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec 00:23:58.165 10:05:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:58.165 10:05:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:58.166 10:05:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:58.166 10:05:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:58.166 10:05:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:58.166 10:05:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:58.425 10:05:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 1 00:23:58.425 10:05:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:58.425 10:05:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:23:58.425 10:05:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:23:58.425 10:05:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:23:58.425 10:05:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:58.425 10:05:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:58.425 10:05:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:58.425 10:05:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:58.425 10:05:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:58.425 10:05:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:58.425 10:05:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:58.685 00:23:58.945 10:05:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:58.945 10:05:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:58.945 10:05:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:58.945 10:05:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:58.945 10:05:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:58.945 10:05:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:58.945 10:05:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:58.945 10:05:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:58.945 10:05:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:58.945 { 00:23:58.945 "auth": { 00:23:58.945 "dhgroup": "ffdhe3072", 00:23:58.945 "digest": "sha512", 00:23:58.945 "state": "completed" 00:23:58.945 }, 00:23:58.945 "cntlid": 115, 00:23:58.945 "listen_address": { 00:23:58.945 "adrfam": "IPv4", 00:23:58.945 "traddr": "10.0.0.2", 00:23:58.945 "trsvcid": "4420", 00:23:58.945 "trtype": "TCP" 00:23:58.945 }, 00:23:58.945 "peer_address": { 00:23:58.945 "adrfam": "IPv4", 00:23:58.945 "traddr": "10.0.0.1", 00:23:58.945 "trsvcid": "47254", 00:23:58.945 "trtype": "TCP" 00:23:58.945 }, 00:23:58.945 "qid": 0, 00:23:58.945 "state": "enabled", 00:23:58.945 "thread": "nvmf_tgt_poll_group_000" 00:23:58.945 } 00:23:58.945 ]' 00:23:58.945 10:05:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:59.204 10:05:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:59.204 10:05:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:59.204 10:05:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:23:59.204 10:05:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:59.204 10:05:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:59.204 10:05:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:59.204 10:05:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:59.463 10:05:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec --hostid a2b6b25a-cc90-4aea-9f09-c06f8a634aec --dhchap-secret DHHC-1:01:YzNiODQ3NTllNGRkNjkzZWNkZGIyOTVjZTkwMGI1OTNQ924+: --dhchap-ctrl-secret DHHC-1:02:YjczYTQ1MGVmY2EyYWFkNTM0NzE5OGNjOTk2ZDJjZmYyMjYyODliZDNhNzYxZmUxbh2Sew==: 00:24:00.032 10:05:13 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:00.032 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:00.032 10:05:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec 00:24:00.032 10:05:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:00.032 10:05:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:00.032 10:05:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:00.032 10:05:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:24:00.032 10:05:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:24:00.032 10:05:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:24:00.292 10:05:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 2 00:24:00.292 10:05:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:24:00.292 10:05:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:24:00.292 10:05:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:24:00.292 10:05:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:24:00.292 10:05:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:00.292 10:05:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:00.292 10:05:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:00.292 10:05:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:00.292 10:05:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:00.292 10:05:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:00.292 10:05:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:00.576 00:24:00.576 10:05:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:24:00.576 10:05:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:24:00.576 10:05:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:00.835 10:05:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:00.835 10:05:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:24:00.835 10:05:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:00.835 10:05:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:00.835 10:05:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:00.835 10:05:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:24:00.835 { 00:24:00.835 "auth": { 00:24:00.835 "dhgroup": "ffdhe3072", 00:24:00.835 "digest": "sha512", 00:24:00.835 "state": "completed" 00:24:00.835 }, 00:24:00.835 "cntlid": 117, 00:24:00.835 "listen_address": { 00:24:00.835 "adrfam": "IPv4", 00:24:00.835 "traddr": "10.0.0.2", 00:24:00.835 "trsvcid": "4420", 00:24:00.835 "trtype": "TCP" 00:24:00.835 }, 00:24:00.835 "peer_address": { 00:24:00.835 "adrfam": "IPv4", 00:24:00.835 "traddr": "10.0.0.1", 00:24:00.835 "trsvcid": "47270", 00:24:00.835 "trtype": "TCP" 00:24:00.835 }, 00:24:00.835 "qid": 0, 00:24:00.835 "state": "enabled", 00:24:00.835 "thread": "nvmf_tgt_poll_group_000" 00:24:00.835 } 00:24:00.835 ]' 00:24:00.835 10:05:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:24:00.835 10:05:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:24:00.835 10:05:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:24:01.095 10:05:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:24:01.095 10:05:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:24:01.095 10:05:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:01.095 10:05:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:01.095 10:05:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:01.354 10:05:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec --hostid a2b6b25a-cc90-4aea-9f09-c06f8a634aec --dhchap-secret DHHC-1:02:NjU0ZjNjOWFkMGViNDg5OTZjOGQxNGI0NjE3ZDIxZTVkYjZiMWI1ZTA1MDMyMzY4c2X49g==: --dhchap-ctrl-secret DHHC-1:01:OWI2YTRmODIwODQ4MGIxNTQzYjY0YTdkMWJlYzU0NzEP+U4Y: 00:24:01.922 10:05:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:01.922 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:01.922 10:05:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec 00:24:01.922 10:05:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:01.922 10:05:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:01.922 10:05:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:01.922 10:05:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:24:01.922 10:05:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:24:01.922 10:05:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:24:02.182 10:05:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 3 00:24:02.182 10:05:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:24:02.182 10:05:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:24:02.182 10:05:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:24:02.182 10:05:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:24:02.182 10:05:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:02.182 10:05:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec --dhchap-key key3 00:24:02.182 10:05:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:02.182 10:05:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:02.182 10:05:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:02.182 10:05:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:24:02.182 10:05:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:24:02.442 00:24:02.442 10:05:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:24:02.442 10:05:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:24:02.442 10:05:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:02.701 10:05:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:02.701 10:05:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:02.701 10:05:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:02.701 10:05:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:02.701 10:05:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:02.701 10:05:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:24:02.701 { 00:24:02.701 "auth": { 00:24:02.701 "dhgroup": "ffdhe3072", 00:24:02.701 "digest": "sha512", 00:24:02.701 "state": "completed" 00:24:02.701 }, 00:24:02.701 "cntlid": 119, 00:24:02.701 "listen_address": { 00:24:02.701 "adrfam": "IPv4", 00:24:02.701 "traddr": "10.0.0.2", 00:24:02.701 "trsvcid": "4420", 00:24:02.701 "trtype": "TCP" 00:24:02.701 }, 00:24:02.701 "peer_address": { 00:24:02.701 "adrfam": "IPv4", 00:24:02.701 "traddr": "10.0.0.1", 00:24:02.701 "trsvcid": "47292", 00:24:02.701 "trtype": "TCP" 00:24:02.701 }, 00:24:02.701 "qid": 0, 00:24:02.701 "state": "enabled", 00:24:02.701 "thread": "nvmf_tgt_poll_group_000" 00:24:02.701 } 00:24:02.701 ]' 00:24:02.701 10:05:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:24:02.701 
10:05:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:24:02.701 10:05:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:24:02.701 10:05:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:24:02.701 10:05:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:24:02.960 10:05:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:02.960 10:05:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:02.960 10:05:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:02.960 10:05:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec --hostid a2b6b25a-cc90-4aea-9f09-c06f8a634aec --dhchap-secret DHHC-1:03:YzJiYjU3YjhkZjhmNTcwYjY2OGFkYmY4ZDc5MTFjOWZmMDUwZTM5OTA3MmQxZGE5Nzg2M2IzMjAxNDNhYjM4Mgzy6hM=: 00:24:03.899 10:05:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:03.899 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:03.899 10:05:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec 00:24:03.899 10:05:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:03.899 10:05:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:03.899 10:05:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:03.899 10:05:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:24:03.899 10:05:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:24:03.899 10:05:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:24:03.899 10:05:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:24:03.899 10:05:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 0 00:24:03.899 10:05:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:24:03.899 10:05:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:24:03.899 10:05:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:24:03.899 10:05:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:24:03.899 10:05:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:03.899 10:05:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:03.899 10:05:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:03.899 10:05:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:03.899 10:05:17 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:03.899 10:05:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:03.899 10:05:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:04.468 00:24:04.468 10:05:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:24:04.468 10:05:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:24:04.468 10:05:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:04.468 10:05:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:04.468 10:05:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:04.468 10:05:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:04.468 10:05:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:04.468 10:05:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:04.468 10:05:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:24:04.468 { 00:24:04.468 "auth": { 00:24:04.468 "dhgroup": "ffdhe4096", 00:24:04.468 "digest": "sha512", 00:24:04.468 "state": "completed" 00:24:04.468 }, 00:24:04.468 "cntlid": 121, 00:24:04.468 "listen_address": { 00:24:04.468 "adrfam": "IPv4", 00:24:04.468 "traddr": "10.0.0.2", 00:24:04.468 "trsvcid": "4420", 00:24:04.468 "trtype": "TCP" 00:24:04.468 }, 00:24:04.468 "peer_address": { 00:24:04.468 "adrfam": "IPv4", 00:24:04.468 "traddr": "10.0.0.1", 00:24:04.468 "trsvcid": "47326", 00:24:04.468 "trtype": "TCP" 00:24:04.468 }, 00:24:04.468 "qid": 0, 00:24:04.468 "state": "enabled", 00:24:04.468 "thread": "nvmf_tgt_poll_group_000" 00:24:04.468 } 00:24:04.468 ]' 00:24:04.468 10:05:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:24:04.468 10:05:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:24:04.468 10:05:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:24:04.728 10:05:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:24:04.728 10:05:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:24:04.728 10:05:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:04.728 10:05:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:04.728 10:05:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:04.988 10:05:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec --hostid a2b6b25a-cc90-4aea-9f09-c06f8a634aec --dhchap-secret 
DHHC-1:00:NWQxMzZmNWQwNzExZTJlMmFlZjYyZTRlZjNhY2ZkOGQ1N2U4NDExZWJlYjNhZmFl2ifiLg==: --dhchap-ctrl-secret DHHC-1:03:NTM0YWI3NDk4YjFmMzczMjJkYjJkNjdmZmE1ZDU2OTU0ZjAzNjM0MzUyOGFhMDc4MmQxMjliYjEyYmM0MTFmMrepI4o=: 00:24:05.554 10:05:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:05.554 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:05.554 10:05:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec 00:24:05.554 10:05:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:05.554 10:05:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:05.554 10:05:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:05.554 10:05:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:24:05.554 10:05:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:24:05.554 10:05:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:24:05.811 10:05:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 1 00:24:05.812 10:05:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:24:05.812 10:05:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:24:05.812 10:05:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:24:05.812 10:05:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:24:05.812 10:05:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:05.812 10:05:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:05.812 10:05:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:05.812 10:05:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:05.812 10:05:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:05.812 10:05:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:05.812 10:05:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:06.069 00:24:06.069 10:05:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:24:06.069 10:05:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:06.069 10:05:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 
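The qpair dump printed next is how target/auth.sh confirms that DH-HMAC-CHAP actually ran: it reads the auth block returned by nvmf_subsystem_get_qpairs and compares digest, dhgroup and state against what was just configured. A minimal stand-alone version of that check, using a trimmed copy of the JSON visible in this trace (the qpairs assignment here is illustrative, not the script's own code):

    # JSON as returned by: scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 (trimmed)
    qpairs='[{"auth": {"dhgroup": "ffdhe4096", "digest": "sha512", "state": "completed"}, "qid": 0, "state": "enabled"}]'

    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == "sha512" ]]     # negotiated hash
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "ffdhe4096" ]]  # negotiated DH group
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == "completed" ]]  # handshake finished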
00:24:06.326 10:05:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:06.326 10:05:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:06.326 10:05:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:06.326 10:05:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:06.326 10:05:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:06.326 10:05:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:24:06.326 { 00:24:06.326 "auth": { 00:24:06.326 "dhgroup": "ffdhe4096", 00:24:06.326 "digest": "sha512", 00:24:06.326 "state": "completed" 00:24:06.326 }, 00:24:06.326 "cntlid": 123, 00:24:06.326 "listen_address": { 00:24:06.326 "adrfam": "IPv4", 00:24:06.326 "traddr": "10.0.0.2", 00:24:06.326 "trsvcid": "4420", 00:24:06.326 "trtype": "TCP" 00:24:06.326 }, 00:24:06.326 "peer_address": { 00:24:06.326 "adrfam": "IPv4", 00:24:06.326 "traddr": "10.0.0.1", 00:24:06.326 "trsvcid": "47350", 00:24:06.326 "trtype": "TCP" 00:24:06.326 }, 00:24:06.326 "qid": 0, 00:24:06.326 "state": "enabled", 00:24:06.326 "thread": "nvmf_tgt_poll_group_000" 00:24:06.326 } 00:24:06.326 ]' 00:24:06.326 10:05:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:24:06.326 10:05:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:24:06.326 10:05:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:24:06.326 10:05:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:24:06.326 10:05:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:24:06.585 10:05:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:06.585 10:05:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:06.585 10:05:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:06.843 10:05:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec --hostid a2b6b25a-cc90-4aea-9f09-c06f8a634aec --dhchap-secret DHHC-1:01:YzNiODQ3NTllNGRkNjkzZWNkZGIyOTVjZTkwMGI1OTNQ924+: --dhchap-ctrl-secret DHHC-1:02:YjczYTQ1MGVmY2EyYWFkNTM0NzE5OGNjOTk2ZDJjZmYyMjYyODliZDNhNzYxZmUxbh2Sew==: 00:24:07.465 10:05:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:07.465 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:07.465 10:05:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec 00:24:07.465 10:05:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:07.465 10:05:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:07.465 10:05:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:07.465 10:05:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:24:07.465 10:05:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests 
sha512 --dhchap-dhgroups ffdhe4096 00:24:07.465 10:05:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:24:07.465 10:05:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 2 00:24:07.465 10:05:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:24:07.465 10:05:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:24:07.466 10:05:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:24:07.466 10:05:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:24:07.466 10:05:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:07.466 10:05:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:07.466 10:05:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:07.466 10:05:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:07.466 10:05:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:07.466 10:05:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:07.466 10:05:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:08.045 00:24:08.045 10:05:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:24:08.045 10:05:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:24:08.045 10:05:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:08.045 10:05:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:08.045 10:05:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:08.045 10:05:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:08.045 10:05:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:08.045 10:05:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:08.045 10:05:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:24:08.045 { 00:24:08.045 "auth": { 00:24:08.045 "dhgroup": "ffdhe4096", 00:24:08.045 "digest": "sha512", 00:24:08.045 "state": "completed" 00:24:08.045 }, 00:24:08.045 "cntlid": 125, 00:24:08.045 "listen_address": { 00:24:08.045 "adrfam": "IPv4", 00:24:08.045 "traddr": "10.0.0.2", 00:24:08.045 "trsvcid": "4420", 00:24:08.045 "trtype": "TCP" 00:24:08.045 }, 00:24:08.045 "peer_address": { 00:24:08.045 "adrfam": "IPv4", 00:24:08.045 "traddr": "10.0.0.1", 00:24:08.045 "trsvcid": "34618", 00:24:08.045 
"trtype": "TCP" 00:24:08.045 }, 00:24:08.045 "qid": 0, 00:24:08.045 "state": "enabled", 00:24:08.045 "thread": "nvmf_tgt_poll_group_000" 00:24:08.045 } 00:24:08.045 ]' 00:24:08.045 10:05:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:24:08.303 10:05:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:24:08.303 10:05:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:24:08.303 10:05:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:24:08.303 10:05:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:24:08.303 10:05:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:08.303 10:05:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:08.303 10:05:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:08.561 10:05:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec --hostid a2b6b25a-cc90-4aea-9f09-c06f8a634aec --dhchap-secret DHHC-1:02:NjU0ZjNjOWFkMGViNDg5OTZjOGQxNGI0NjE3ZDIxZTVkYjZiMWI1ZTA1MDMyMzY4c2X49g==: --dhchap-ctrl-secret DHHC-1:01:OWI2YTRmODIwODQ4MGIxNTQzYjY0YTdkMWJlYzU0NzEP+U4Y: 00:24:09.129 10:05:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:09.129 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:09.129 10:05:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec 00:24:09.129 10:05:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:09.129 10:05:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:09.129 10:05:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:09.129 10:05:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:24:09.129 10:05:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:24:09.129 10:05:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:24:09.388 10:05:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 3 00:24:09.388 10:05:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:24:09.388 10:05:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:24:09.388 10:05:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:24:09.388 10:05:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:24:09.388 10:05:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:09.388 10:05:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec --dhchap-key key3 00:24:09.388 10:05:22 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:09.388 10:05:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:09.388 10:05:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:09.388 10:05:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:24:09.388 10:05:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:24:09.646 00:24:09.646 10:05:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:24:09.646 10:05:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:09.646 10:05:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:24:09.904 10:05:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:09.904 10:05:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:09.904 10:05:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:09.904 10:05:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:09.904 10:05:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:09.904 10:05:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:24:09.904 { 00:24:09.904 "auth": { 00:24:09.904 "dhgroup": "ffdhe4096", 00:24:09.904 "digest": "sha512", 00:24:09.904 "state": "completed" 00:24:09.904 }, 00:24:09.904 "cntlid": 127, 00:24:09.904 "listen_address": { 00:24:09.904 "adrfam": "IPv4", 00:24:09.904 "traddr": "10.0.0.2", 00:24:09.904 "trsvcid": "4420", 00:24:09.904 "trtype": "TCP" 00:24:09.904 }, 00:24:09.904 "peer_address": { 00:24:09.904 "adrfam": "IPv4", 00:24:09.904 "traddr": "10.0.0.1", 00:24:09.904 "trsvcid": "34654", 00:24:09.904 "trtype": "TCP" 00:24:09.904 }, 00:24:09.904 "qid": 0, 00:24:09.904 "state": "enabled", 00:24:09.904 "thread": "nvmf_tgt_poll_group_000" 00:24:09.904 } 00:24:09.904 ]' 00:24:09.904 10:05:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:24:09.904 10:05:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:24:09.904 10:05:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:24:09.904 10:05:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:24:09.904 10:05:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:24:09.904 10:05:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:09.904 10:05:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:09.904 10:05:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:10.162 10:05:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 
10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec --hostid a2b6b25a-cc90-4aea-9f09-c06f8a634aec --dhchap-secret DHHC-1:03:YzJiYjU3YjhkZjhmNTcwYjY2OGFkYmY4ZDc5MTFjOWZmMDUwZTM5OTA3MmQxZGE5Nzg2M2IzMjAxNDNhYjM4Mgzy6hM=: 00:24:10.727 10:05:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:10.727 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:10.727 10:05:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec 00:24:10.727 10:05:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:10.727 10:05:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:10.727 10:05:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:10.727 10:05:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:24:10.727 10:05:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:24:10.727 10:05:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:24:10.727 10:05:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:24:10.984 10:05:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 0 00:24:10.984 10:05:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:24:10.984 10:05:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:24:10.984 10:05:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:24:10.984 10:05:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:24:10.984 10:05:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:10.984 10:05:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:10.984 10:05:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:10.984 10:05:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:10.984 10:05:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:10.984 10:05:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:10.984 10:05:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:11.320 00:24:11.578 10:05:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:24:11.578 10:05:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r 
'.[].name' 00:24:11.578 10:05:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:11.835 10:05:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:11.835 10:05:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:11.835 10:05:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:11.835 10:05:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:11.835 10:05:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:11.835 10:05:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:24:11.835 { 00:24:11.835 "auth": { 00:24:11.835 "dhgroup": "ffdhe6144", 00:24:11.835 "digest": "sha512", 00:24:11.835 "state": "completed" 00:24:11.835 }, 00:24:11.835 "cntlid": 129, 00:24:11.835 "listen_address": { 00:24:11.835 "adrfam": "IPv4", 00:24:11.835 "traddr": "10.0.0.2", 00:24:11.835 "trsvcid": "4420", 00:24:11.835 "trtype": "TCP" 00:24:11.836 }, 00:24:11.836 "peer_address": { 00:24:11.836 "adrfam": "IPv4", 00:24:11.836 "traddr": "10.0.0.1", 00:24:11.836 "trsvcid": "34678", 00:24:11.836 "trtype": "TCP" 00:24:11.836 }, 00:24:11.836 "qid": 0, 00:24:11.836 "state": "enabled", 00:24:11.836 "thread": "nvmf_tgt_poll_group_000" 00:24:11.836 } 00:24:11.836 ]' 00:24:11.836 10:05:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:24:11.836 10:05:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:24:11.836 10:05:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:24:11.836 10:05:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:24:11.836 10:05:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:24:11.836 10:05:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:11.836 10:05:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:11.836 10:05:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:12.094 10:05:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec --hostid a2b6b25a-cc90-4aea-9f09-c06f8a634aec --dhchap-secret DHHC-1:00:NWQxMzZmNWQwNzExZTJlMmFlZjYyZTRlZjNhY2ZkOGQ1N2U4NDExZWJlYjNhZmFl2ifiLg==: --dhchap-ctrl-secret DHHC-1:03:NTM0YWI3NDk4YjFmMzczMjJkYjJkNjdmZmE1ZDU2OTU0ZjAzNjM0MzUyOGFhMDc4MmQxMjliYjEyYmM0MTFmMrepI4o=: 00:24:12.662 10:05:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:12.662 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:12.662 10:05:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec 00:24:12.662 10:05:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:12.663 10:05:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:12.663 10:05:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
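Each block of trace above is one pass of the test's connect_authenticate loop for a given digest/dhgroup/key index. A condensed sketch of that pass, paraphrased from the commands in this log; hostnqn, key0/ckey0 and the key/ckey secrets stand for the host NQN and keys set up earlier in auth.sh (not shown in this excerpt), the target-side rpc.py calls use SPDK's default socket and the host-side ones the /var/tmp/host.sock instance:

    # 1. Restrict the host bdev_nvme layer to one digest/dhgroup combination.
    scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
    # 2. Allow the host on the target subsystem with a specific key pair.
    scripts/rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0
    # 3. Attach from the host side; this performs the in-band DH-HMAC-CHAP handshake.
    scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.2 -s 4420 -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0
    # 4. Verify on the target that the qpair reports auth state "completed", then detach.
    scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
    scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
    # 5. Repeat the handshake through nvme-cli with the raw DHHC-1 secrets, then clean up.
    nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q "$hostnqn" \
        --dhchap-secret "$key" --dhchap-ctrl-secret "$ckey"
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0
    scripts/rpc.py nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$hostnqn"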
00:24:12.663 10:05:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:24:12.663 10:05:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:24:12.663 10:05:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:24:12.921 10:05:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 1 00:24:12.921 10:05:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:24:12.921 10:05:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:24:12.921 10:05:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:24:12.921 10:05:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:24:12.921 10:05:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:12.921 10:05:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:12.921 10:05:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:12.921 10:05:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:12.921 10:05:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:12.921 10:05:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:12.921 10:05:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:13.488 00:24:13.489 10:05:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:24:13.489 10:05:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:24:13.489 10:05:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:13.747 10:05:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:13.747 10:05:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:13.747 10:05:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:13.747 10:05:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:13.747 10:05:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:13.747 10:05:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:24:13.747 { 00:24:13.747 "auth": { 00:24:13.747 "dhgroup": "ffdhe6144", 00:24:13.747 "digest": "sha512", 00:24:13.747 "state": "completed" 00:24:13.747 }, 00:24:13.747 "cntlid": 131, 00:24:13.747 "listen_address": { 00:24:13.747 "adrfam": "IPv4", 00:24:13.747 "traddr": "10.0.0.2", 
00:24:13.747 "trsvcid": "4420", 00:24:13.747 "trtype": "TCP" 00:24:13.747 }, 00:24:13.747 "peer_address": { 00:24:13.748 "adrfam": "IPv4", 00:24:13.748 "traddr": "10.0.0.1", 00:24:13.748 "trsvcid": "34698", 00:24:13.748 "trtype": "TCP" 00:24:13.748 }, 00:24:13.748 "qid": 0, 00:24:13.748 "state": "enabled", 00:24:13.748 "thread": "nvmf_tgt_poll_group_000" 00:24:13.748 } 00:24:13.748 ]' 00:24:13.748 10:05:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:24:13.748 10:05:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:24:13.748 10:05:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:24:13.748 10:05:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:24:13.748 10:05:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:24:13.748 10:05:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:13.748 10:05:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:13.748 10:05:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:14.006 10:05:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec --hostid a2b6b25a-cc90-4aea-9f09-c06f8a634aec --dhchap-secret DHHC-1:01:YzNiODQ3NTllNGRkNjkzZWNkZGIyOTVjZTkwMGI1OTNQ924+: --dhchap-ctrl-secret DHHC-1:02:YjczYTQ1MGVmY2EyYWFkNTM0NzE5OGNjOTk2ZDJjZmYyMjYyODliZDNhNzYxZmUxbh2Sew==: 00:24:14.571 10:05:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:14.571 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:14.571 10:05:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec 00:24:14.572 10:05:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:14.572 10:05:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:14.572 10:05:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:14.572 10:05:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:24:14.572 10:05:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:24:14.572 10:05:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:24:14.831 10:05:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 2 00:24:14.831 10:05:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:24:14.831 10:05:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:24:14.831 10:05:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:24:14.831 10:05:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:24:14.831 10:05:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:14.831 10:05:28 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:14.831 10:05:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:14.831 10:05:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:14.831 10:05:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:14.831 10:05:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:14.831 10:05:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:15.398 00:24:15.398 10:05:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:24:15.398 10:05:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:24:15.398 10:05:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:15.656 10:05:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:15.656 10:05:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:15.656 10:05:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:15.656 10:05:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:15.656 10:05:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:15.656 10:05:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:24:15.656 { 00:24:15.656 "auth": { 00:24:15.656 "dhgroup": "ffdhe6144", 00:24:15.656 "digest": "sha512", 00:24:15.656 "state": "completed" 00:24:15.656 }, 00:24:15.656 "cntlid": 133, 00:24:15.656 "listen_address": { 00:24:15.656 "adrfam": "IPv4", 00:24:15.656 "traddr": "10.0.0.2", 00:24:15.656 "trsvcid": "4420", 00:24:15.656 "trtype": "TCP" 00:24:15.656 }, 00:24:15.656 "peer_address": { 00:24:15.656 "adrfam": "IPv4", 00:24:15.656 "traddr": "10.0.0.1", 00:24:15.656 "trsvcid": "34728", 00:24:15.656 "trtype": "TCP" 00:24:15.656 }, 00:24:15.656 "qid": 0, 00:24:15.656 "state": "enabled", 00:24:15.656 "thread": "nvmf_tgt_poll_group_000" 00:24:15.656 } 00:24:15.656 ]' 00:24:15.656 10:05:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:24:15.656 10:05:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:24:15.656 10:05:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:24:15.656 10:05:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:24:15.656 10:05:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:24:15.656 10:05:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:15.656 10:05:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:24:15.656 10:05:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:15.914 10:05:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec --hostid a2b6b25a-cc90-4aea-9f09-c06f8a634aec --dhchap-secret DHHC-1:02:NjU0ZjNjOWFkMGViNDg5OTZjOGQxNGI0NjE3ZDIxZTVkYjZiMWI1ZTA1MDMyMzY4c2X49g==: --dhchap-ctrl-secret DHHC-1:01:OWI2YTRmODIwODQ4MGIxNTQzYjY0YTdkMWJlYzU0NzEP+U4Y: 00:24:16.478 10:05:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:16.478 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:16.478 10:05:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec 00:24:16.478 10:05:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:16.478 10:05:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:16.478 10:05:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:16.478 10:05:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:24:16.478 10:05:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:24:16.478 10:05:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:24:16.736 10:05:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 3 00:24:16.736 10:05:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:24:16.736 10:05:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:24:16.736 10:05:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:24:16.736 10:05:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:24:16.736 10:05:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:16.736 10:05:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec --dhchap-key key3 00:24:16.736 10:05:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:16.736 10:05:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:16.736 10:05:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:16.736 10:05:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:24:16.736 10:05:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:24:17.302 
00:24:17.302 10:05:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:24:17.302 10:05:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:24:17.302 10:05:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:17.302 10:05:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:17.302 10:05:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:17.302 10:05:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:17.302 10:05:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:17.302 10:05:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:17.302 10:05:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:24:17.302 { 00:24:17.302 "auth": { 00:24:17.302 "dhgroup": "ffdhe6144", 00:24:17.302 "digest": "sha512", 00:24:17.302 "state": "completed" 00:24:17.302 }, 00:24:17.302 "cntlid": 135, 00:24:17.302 "listen_address": { 00:24:17.302 "adrfam": "IPv4", 00:24:17.302 "traddr": "10.0.0.2", 00:24:17.302 "trsvcid": "4420", 00:24:17.302 "trtype": "TCP" 00:24:17.302 }, 00:24:17.302 "peer_address": { 00:24:17.302 "adrfam": "IPv4", 00:24:17.302 "traddr": "10.0.0.1", 00:24:17.303 "trsvcid": "34764", 00:24:17.303 "trtype": "TCP" 00:24:17.303 }, 00:24:17.303 "qid": 0, 00:24:17.303 "state": "enabled", 00:24:17.303 "thread": "nvmf_tgt_poll_group_000" 00:24:17.303 } 00:24:17.303 ]' 00:24:17.303 10:05:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:24:17.561 10:05:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:24:17.561 10:05:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:24:17.561 10:05:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:24:17.561 10:05:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:24:17.561 10:05:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:17.561 10:05:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:17.561 10:05:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:17.818 10:05:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec --hostid a2b6b25a-cc90-4aea-9f09-c06f8a634aec --dhchap-secret DHHC-1:03:YzJiYjU3YjhkZjhmNTcwYjY2OGFkYmY4ZDc5MTFjOWZmMDUwZTM5OTA3MmQxZGE5Nzg2M2IzMjAxNDNhYjM4Mgzy6hM=: 00:24:18.391 10:05:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:18.391 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:18.391 10:05:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec 00:24:18.391 10:05:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:18.391 10:05:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:18.391 10:05:31 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:18.391 10:05:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:24:18.391 10:05:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:24:18.391 10:05:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:24:18.391 10:05:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:24:18.662 10:05:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 0 00:24:18.662 10:05:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:24:18.662 10:05:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:24:18.662 10:05:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:24:18.662 10:05:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:24:18.662 10:05:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:18.662 10:05:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:18.662 10:05:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:18.662 10:05:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:18.662 10:05:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:18.662 10:05:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:18.662 10:05:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:19.230 00:24:19.230 10:05:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:24:19.230 10:05:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:24:19.230 10:05:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:19.488 10:05:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:19.488 10:05:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:19.488 10:05:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:19.488 10:05:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:19.488 10:05:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:19.488 10:05:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:24:19.488 { 00:24:19.488 "auth": { 00:24:19.488 "dhgroup": "ffdhe8192", 00:24:19.488 "digest": "sha512", 
00:24:19.488 "state": "completed" 00:24:19.488 }, 00:24:19.488 "cntlid": 137, 00:24:19.488 "listen_address": { 00:24:19.488 "adrfam": "IPv4", 00:24:19.488 "traddr": "10.0.0.2", 00:24:19.488 "trsvcid": "4420", 00:24:19.488 "trtype": "TCP" 00:24:19.488 }, 00:24:19.488 "peer_address": { 00:24:19.488 "adrfam": "IPv4", 00:24:19.488 "traddr": "10.0.0.1", 00:24:19.488 "trsvcid": "38084", 00:24:19.488 "trtype": "TCP" 00:24:19.489 }, 00:24:19.489 "qid": 0, 00:24:19.489 "state": "enabled", 00:24:19.489 "thread": "nvmf_tgt_poll_group_000" 00:24:19.489 } 00:24:19.489 ]' 00:24:19.489 10:05:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:24:19.489 10:05:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:24:19.489 10:05:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:24:19.489 10:05:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:24:19.489 10:05:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:24:19.489 10:05:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:19.489 10:05:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:19.489 10:05:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:19.750 10:05:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec --hostid a2b6b25a-cc90-4aea-9f09-c06f8a634aec --dhchap-secret DHHC-1:00:NWQxMzZmNWQwNzExZTJlMmFlZjYyZTRlZjNhY2ZkOGQ1N2U4NDExZWJlYjNhZmFl2ifiLg==: --dhchap-ctrl-secret DHHC-1:03:NTM0YWI3NDk4YjFmMzczMjJkYjJkNjdmZmE1ZDU2OTU0ZjAzNjM0MzUyOGFhMDc4MmQxMjliYjEyYmM0MTFmMrepI4o=: 00:24:20.320 10:05:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:20.320 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:20.320 10:05:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec 00:24:20.320 10:05:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:20.320 10:05:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:20.320 10:05:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:20.320 10:05:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:24:20.320 10:05:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:24:20.320 10:05:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:24:20.579 10:05:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 1 00:24:20.579 10:05:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:24:20.579 10:05:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:24:20.579 10:05:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:24:20.579 10:05:34 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:24:20.579 10:05:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:20.579 10:05:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:20.579 10:05:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:20.579 10:05:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:20.579 10:05:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:20.579 10:05:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:20.579 10:05:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:21.148 00:24:21.148 10:05:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:24:21.148 10:05:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:24:21.148 10:05:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:21.407 10:05:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:21.407 10:05:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:21.407 10:05:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:21.407 10:05:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:21.407 10:05:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:21.407 10:05:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:24:21.407 { 00:24:21.407 "auth": { 00:24:21.407 "dhgroup": "ffdhe8192", 00:24:21.407 "digest": "sha512", 00:24:21.407 "state": "completed" 00:24:21.407 }, 00:24:21.407 "cntlid": 139, 00:24:21.407 "listen_address": { 00:24:21.407 "adrfam": "IPv4", 00:24:21.407 "traddr": "10.0.0.2", 00:24:21.407 "trsvcid": "4420", 00:24:21.407 "trtype": "TCP" 00:24:21.407 }, 00:24:21.407 "peer_address": { 00:24:21.407 "adrfam": "IPv4", 00:24:21.407 "traddr": "10.0.0.1", 00:24:21.407 "trsvcid": "38102", 00:24:21.407 "trtype": "TCP" 00:24:21.407 }, 00:24:21.407 "qid": 0, 00:24:21.407 "state": "enabled", 00:24:21.407 "thread": "nvmf_tgt_poll_group_000" 00:24:21.407 } 00:24:21.407 ]' 00:24:21.407 10:05:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:24:21.407 10:05:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:24:21.407 10:05:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:24:21.407 10:05:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:24:21.407 10:05:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 
00:24:21.407 10:05:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:21.407 10:05:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:21.408 10:05:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:21.666 10:05:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec --hostid a2b6b25a-cc90-4aea-9f09-c06f8a634aec --dhchap-secret DHHC-1:01:YzNiODQ3NTllNGRkNjkzZWNkZGIyOTVjZTkwMGI1OTNQ924+: --dhchap-ctrl-secret DHHC-1:02:YjczYTQ1MGVmY2EyYWFkNTM0NzE5OGNjOTk2ZDJjZmYyMjYyODliZDNhNzYxZmUxbh2Sew==: 00:24:22.235 10:05:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:22.235 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:22.235 10:05:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec 00:24:22.235 10:05:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:22.235 10:05:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:22.235 10:05:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:22.235 10:05:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:24:22.235 10:05:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:24:22.235 10:05:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:24:22.494 10:05:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 2 00:24:22.494 10:05:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:24:22.494 10:05:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:24:22.494 10:05:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:24:22.494 10:05:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:24:22.494 10:05:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:22.494 10:05:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:22.494 10:05:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:22.494 10:05:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:22.494 10:05:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:22.494 10:05:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:22.494 10:05:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:23.062 00:24:23.062 10:05:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:24:23.062 10:05:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:24:23.062 10:05:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:23.321 10:05:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:23.321 10:05:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:23.321 10:05:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:23.321 10:05:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:23.321 10:05:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:23.321 10:05:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:24:23.321 { 00:24:23.321 "auth": { 00:24:23.321 "dhgroup": "ffdhe8192", 00:24:23.321 "digest": "sha512", 00:24:23.321 "state": "completed" 00:24:23.321 }, 00:24:23.321 "cntlid": 141, 00:24:23.321 "listen_address": { 00:24:23.321 "adrfam": "IPv4", 00:24:23.321 "traddr": "10.0.0.2", 00:24:23.321 "trsvcid": "4420", 00:24:23.321 "trtype": "TCP" 00:24:23.321 }, 00:24:23.321 "peer_address": { 00:24:23.321 "adrfam": "IPv4", 00:24:23.321 "traddr": "10.0.0.1", 00:24:23.321 "trsvcid": "38122", 00:24:23.321 "trtype": "TCP" 00:24:23.321 }, 00:24:23.321 "qid": 0, 00:24:23.321 "state": "enabled", 00:24:23.321 "thread": "nvmf_tgt_poll_group_000" 00:24:23.321 } 00:24:23.321 ]' 00:24:23.321 10:05:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:24:23.321 10:05:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:24:23.321 10:05:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:24:23.321 10:05:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:24:23.321 10:05:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:24:23.321 10:05:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:23.321 10:05:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:23.321 10:05:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:23.580 10:05:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec --hostid a2b6b25a-cc90-4aea-9f09-c06f8a634aec --dhchap-secret DHHC-1:02:NjU0ZjNjOWFkMGViNDg5OTZjOGQxNGI0NjE3ZDIxZTVkYjZiMWI1ZTA1MDMyMzY4c2X49g==: --dhchap-ctrl-secret DHHC-1:01:OWI2YTRmODIwODQ4MGIxNTQzYjY0YTdkMWJlYzU0NzEP+U4Y: 00:24:24.149 10:05:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:24.149 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:24.149 10:05:37 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec 00:24:24.149 10:05:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:24.149 10:05:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:24.149 10:05:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:24.149 10:05:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:24:24.149 10:05:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:24:24.149 10:05:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:24:24.408 10:05:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 3 00:24:24.408 10:05:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:24:24.408 10:05:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:24:24.408 10:05:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:24:24.408 10:05:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:24:24.408 10:05:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:24.408 10:05:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec --dhchap-key key3 00:24:24.408 10:05:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:24.408 10:05:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:24.408 10:05:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:24.408 10:05:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:24:24.408 10:05:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:24:24.996 00:24:24.996 10:05:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:24:24.996 10:05:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:24:24.996 10:05:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:25.256 10:05:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:25.256 10:05:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:25.256 10:05:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:25.256 10:05:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:25.256 10:05:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:24:25.256 10:05:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:24:25.256 { 00:24:25.256 "auth": { 00:24:25.256 "dhgroup": "ffdhe8192", 00:24:25.256 "digest": "sha512", 00:24:25.256 "state": "completed" 00:24:25.256 }, 00:24:25.256 "cntlid": 143, 00:24:25.256 "listen_address": { 00:24:25.256 "adrfam": "IPv4", 00:24:25.256 "traddr": "10.0.0.2", 00:24:25.256 "trsvcid": "4420", 00:24:25.256 "trtype": "TCP" 00:24:25.256 }, 00:24:25.256 "peer_address": { 00:24:25.256 "adrfam": "IPv4", 00:24:25.256 "traddr": "10.0.0.1", 00:24:25.256 "trsvcid": "38166", 00:24:25.256 "trtype": "TCP" 00:24:25.256 }, 00:24:25.256 "qid": 0, 00:24:25.256 "state": "enabled", 00:24:25.256 "thread": "nvmf_tgt_poll_group_000" 00:24:25.256 } 00:24:25.256 ]' 00:24:25.256 10:05:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:24:25.256 10:05:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:24:25.256 10:05:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:24:25.256 10:05:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:24:25.256 10:05:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:24:25.256 10:05:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:25.256 10:05:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:25.256 10:05:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:25.515 10:05:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec --hostid a2b6b25a-cc90-4aea-9f09-c06f8a634aec --dhchap-secret DHHC-1:03:YzJiYjU3YjhkZjhmNTcwYjY2OGFkYmY4ZDc5MTFjOWZmMDUwZTM5OTA3MmQxZGE5Nzg2M2IzMjAxNDNhYjM4Mgzy6hM=: 00:24:26.085 10:05:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:26.085 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:26.085 10:05:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec 00:24:26.085 10:05:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:26.085 10:05:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:26.085 10:05:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:26.085 10:05:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:24:26.085 10:05:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s sha256,sha384,sha512 00:24:26.085 10:05:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:24:26.085 10:05:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:24:26.085 10:05:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:24:26.085 10:05:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests 
sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:24:26.344 10:05:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@114 -- # connect_authenticate sha512 ffdhe8192 0 00:24:26.344 10:05:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:24:26.344 10:05:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:24:26.344 10:05:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:24:26.344 10:05:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:24:26.344 10:05:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:26.344 10:05:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:26.344 10:05:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:26.344 10:05:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:26.344 10:05:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:26.344 10:05:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:26.344 10:05:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:26.911 00:24:26.912 10:05:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:24:26.912 10:05:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:26.912 10:05:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:24:27.171 10:05:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:27.171 10:05:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:27.171 10:05:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:27.171 10:05:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:27.171 10:05:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:27.171 10:05:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:24:27.171 { 00:24:27.171 "auth": { 00:24:27.171 "dhgroup": "ffdhe8192", 00:24:27.171 "digest": "sha512", 00:24:27.171 "state": "completed" 00:24:27.171 }, 00:24:27.171 "cntlid": 145, 00:24:27.171 "listen_address": { 00:24:27.171 "adrfam": "IPv4", 00:24:27.171 "traddr": "10.0.0.2", 00:24:27.171 "trsvcid": "4420", 00:24:27.171 "trtype": "TCP" 00:24:27.171 }, 00:24:27.171 "peer_address": { 00:24:27.171 "adrfam": "IPv4", 00:24:27.171 "traddr": "10.0.0.1", 00:24:27.171 "trsvcid": "38198", 00:24:27.171 "trtype": "TCP" 00:24:27.171 }, 00:24:27.171 "qid": 0, 00:24:27.171 "state": "enabled", 00:24:27.171 "thread": "nvmf_tgt_poll_group_000" 00:24:27.171 } 
00:24:27.171 ]' 00:24:27.171 10:05:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:24:27.171 10:05:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:24:27.171 10:05:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:24:27.171 10:05:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:24:27.171 10:05:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:24:27.171 10:05:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:27.171 10:05:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:27.171 10:05:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:27.429 10:05:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec --hostid a2b6b25a-cc90-4aea-9f09-c06f8a634aec --dhchap-secret DHHC-1:00:NWQxMzZmNWQwNzExZTJlMmFlZjYyZTRlZjNhY2ZkOGQ1N2U4NDExZWJlYjNhZmFl2ifiLg==: --dhchap-ctrl-secret DHHC-1:03:NTM0YWI3NDk4YjFmMzczMjJkYjJkNjdmZmE1ZDU2OTU0ZjAzNjM0MzUyOGFhMDc4MmQxMjliYjEyYmM0MTFmMrepI4o=: 00:24:27.997 10:05:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:27.997 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:27.997 10:05:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec 00:24:27.997 10:05:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:27.997 10:05:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:27.997 10:05:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:27.997 10:05:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec --dhchap-key key1 00:24:27.997 10:05:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:27.997 10:05:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:27.997 10:05:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:27.997 10:05:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@118 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:24:27.997 10:05:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:24:27.997 10:05:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:24:27.997 10:05:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:24:27.997 10:05:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:27.997 10:05:41 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:24:27.997 10:05:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:27.997 10:05:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:24:27.997 10:05:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:24:28.564 2024/07/15 10:05:41 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) dhchap_key:key2 hdgst:%!s(bool=false) hostnqn:nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:24:28.565 request: 00:24:28.565 { 00:24:28.565 "method": "bdev_nvme_attach_controller", 00:24:28.565 "params": { 00:24:28.565 "name": "nvme0", 00:24:28.565 "trtype": "tcp", 00:24:28.565 "traddr": "10.0.0.2", 00:24:28.565 "adrfam": "ipv4", 00:24:28.565 "trsvcid": "4420", 00:24:28.565 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:24:28.565 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec", 00:24:28.565 "prchk_reftag": false, 00:24:28.565 "prchk_guard": false, 00:24:28.565 "hdgst": false, 00:24:28.565 "ddgst": false, 00:24:28.565 "dhchap_key": "key2" 00:24:28.565 } 00:24:28.565 } 00:24:28.565 Got JSON-RPC error response 00:24:28.565 GoRPCClient: error on JSON-RPC call 00:24:28.565 10:05:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:24:28.565 10:05:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:24:28.565 10:05:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:24:28.565 10:05:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:24:28.565 10:05:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@121 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec 00:24:28.565 10:05:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:28.565 10:05:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:28.565 10:05:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:28.565 10:05:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@124 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:28.565 10:05:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:28.565 10:05:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:28.565 10:05:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:28.565 10:05:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@125 -- # NOT hostrpc bdev_nvme_attach_controller 
-b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:24:28.565 10:05:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:24:28.565 10:05:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:24:28.565 10:05:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:24:28.565 10:05:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:28.565 10:05:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:24:28.565 10:05:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:28.565 10:05:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:24:28.565 10:05:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:24:29.132 2024/07/15 10:05:42 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) dhchap_ctrlr_key:ckey2 dhchap_key:key1 hdgst:%!s(bool=false) hostnqn:nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:24:29.132 request: 00:24:29.132 { 00:24:29.132 "method": "bdev_nvme_attach_controller", 00:24:29.132 "params": { 00:24:29.132 "name": "nvme0", 00:24:29.132 "trtype": "tcp", 00:24:29.132 "traddr": "10.0.0.2", 00:24:29.132 "adrfam": "ipv4", 00:24:29.132 "trsvcid": "4420", 00:24:29.132 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:24:29.132 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec", 00:24:29.132 "prchk_reftag": false, 00:24:29.132 "prchk_guard": false, 00:24:29.132 "hdgst": false, 00:24:29.132 "ddgst": false, 00:24:29.132 "dhchap_key": "key1", 00:24:29.132 "dhchap_ctrlr_key": "ckey2" 00:24:29.132 } 00:24:29.132 } 00:24:29.132 Got JSON-RPC error response 00:24:29.132 GoRPCClient: error on JSON-RPC call 00:24:29.132 10:05:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:24:29.132 10:05:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:24:29.132 10:05:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:24:29.132 10:05:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:24:29.132 10:05:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@128 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec 00:24:29.132 10:05:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:29.132 10:05:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:29.132 10:05:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:29.132 10:05:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@131 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec --dhchap-key key1 00:24:29.132 10:05:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:29.132 10:05:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:29.132 10:05:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:29.132 10:05:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@132 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:29.132 10:05:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:24:29.132 10:05:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:29.132 10:05:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:24:29.132 10:05:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:29.132 10:05:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:24:29.132 10:05:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:29.132 10:05:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:29.132 10:05:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:29.699 2024/07/15 10:05:43 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) dhchap_ctrlr_key:ckey1 dhchap_key:key1 hdgst:%!s(bool=false) hostnqn:nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:24:29.699 request: 00:24:29.699 { 00:24:29.699 "method": "bdev_nvme_attach_controller", 00:24:29.699 "params": { 00:24:29.699 "name": "nvme0", 00:24:29.699 "trtype": "tcp", 00:24:29.699 "traddr": "10.0.0.2", 00:24:29.699 "adrfam": "ipv4", 00:24:29.699 "trsvcid": "4420", 00:24:29.699 "subnqn": "nqn.2024-03.io.spdk:cnode0", 
00:24:29.699 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec", 00:24:29.699 "prchk_reftag": false, 00:24:29.699 "prchk_guard": false, 00:24:29.699 "hdgst": false, 00:24:29.699 "ddgst": false, 00:24:29.699 "dhchap_key": "key1", 00:24:29.699 "dhchap_ctrlr_key": "ckey1" 00:24:29.699 } 00:24:29.699 } 00:24:29.699 Got JSON-RPC error response 00:24:29.699 GoRPCClient: error on JSON-RPC call 00:24:29.699 10:05:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:24:29.699 10:05:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:24:29.699 10:05:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:24:29.699 10:05:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:24:29.699 10:05:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@135 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec 00:24:29.699 10:05:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:29.699 10:05:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:29.699 10:05:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:29.700 10:05:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@138 -- # killprocess 78104 00:24:29.700 10:05:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 78104 ']' 00:24:29.700 10:05:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 78104 00:24:29.700 10:05:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:24:29.700 10:05:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:29.700 10:05:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 78104 00:24:29.700 10:05:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:24:29.700 10:05:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:24:29.700 killing process with pid 78104 00:24:29.700 10:05:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 78104' 00:24:29.700 10:05:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 78104 00:24:29.700 10:05:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 78104 00:24:29.958 10:05:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@139 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:24:29.958 10:05:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:29.958 10:05:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:29.958 10:05:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:29.958 10:05:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=82756 00:24:29.958 10:05:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 82756 00:24:29.958 10:05:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:24:29.958 10:05:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 82756 ']' 00:24:29.958 10:05:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:29.958 10:05:43 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:29.958 10:05:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:29.958 10:05:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:29.958 10:05:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:30.893 10:05:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:30.893 10:05:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:24:30.893 10:05:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:30.893 10:05:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:30.893 10:05:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:30.893 10:05:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:30.893 10:05:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@140 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:24:30.893 10:05:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@142 -- # waitforlisten 82756 00:24:30.893 10:05:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 82756 ']' 00:24:30.893 10:05:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:30.893 10:05:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:30.893 10:05:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:30.893 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
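At this point the trace has killed the first nvmf target (pid 78104) and launched a fresh one with --wait-for-rpc -L nvmf_auth (pid 82756), presumably so that DHCHAP-specific debug logging is enabled for the remaining checks. Below is a minimal sketch of that restart step, assuming the same binary paths, netns name, and default RPC socket shown in the log; it stands in for the suite's nvmfappstart/waitforlisten helpers rather than reproducing them.

  # Minimal sketch (assumptions: same paths/netns/socket as in the trace; not the
  # suite's nvmfappstart/waitforlisten implementation).
  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth &
  nvmfpid=$!
  # Block until the target's default RPC socket answers a trivial RPC.
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done
  # Because the app was started with --wait-for-rpc, subsystem initialization is
  # deferred until framework_start_init is issued over the same socket.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock framework_start_init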
00:24:30.893 10:05:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:30.893 10:05:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:30.893 10:05:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:30.893 10:05:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:24:30.893 10:05:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@143 -- # rpc_cmd 00:24:30.893 10:05:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:30.893 10:05:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:31.151 10:05:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:31.151 10:05:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@153 -- # connect_authenticate sha512 ffdhe8192 3 00:24:31.151 10:05:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:24:31.151 10:05:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:24:31.151 10:05:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:24:31.151 10:05:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:24:31.151 10:05:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:31.151 10:05:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec --dhchap-key key3 00:24:31.151 10:05:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:31.151 10:05:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:31.151 10:05:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:31.151 10:05:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:24:31.151 10:05:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:24:31.719 00:24:31.719 10:05:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:24:31.719 10:05:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:31.719 10:05:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:24:31.980 10:05:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:31.980 10:05:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:31.980 10:05:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:31.980 10:05:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:31.980 10:05:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:31.980 10:05:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:24:31.980 { 00:24:31.980 "auth": { 00:24:31.980 "dhgroup": 
"ffdhe8192", 00:24:31.980 "digest": "sha512", 00:24:31.980 "state": "completed" 00:24:31.980 }, 00:24:31.980 "cntlid": 1, 00:24:31.980 "listen_address": { 00:24:31.980 "adrfam": "IPv4", 00:24:31.980 "traddr": "10.0.0.2", 00:24:31.980 "trsvcid": "4420", 00:24:31.980 "trtype": "TCP" 00:24:31.980 }, 00:24:31.980 "peer_address": { 00:24:31.980 "adrfam": "IPv4", 00:24:31.980 "traddr": "10.0.0.1", 00:24:31.980 "trsvcid": "40304", 00:24:31.980 "trtype": "TCP" 00:24:31.980 }, 00:24:31.980 "qid": 0, 00:24:31.980 "state": "enabled", 00:24:31.980 "thread": "nvmf_tgt_poll_group_000" 00:24:31.980 } 00:24:31.980 ]' 00:24:31.980 10:05:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:24:31.980 10:05:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:24:31.980 10:05:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:24:31.980 10:05:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:24:31.980 10:05:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:24:31.980 10:05:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:31.980 10:05:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:31.980 10:05:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:32.238 10:05:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec --hostid a2b6b25a-cc90-4aea-9f09-c06f8a634aec --dhchap-secret DHHC-1:03:YzJiYjU3YjhkZjhmNTcwYjY2OGFkYmY4ZDc5MTFjOWZmMDUwZTM5OTA3MmQxZGE5Nzg2M2IzMjAxNDNhYjM4Mgzy6hM=: 00:24:32.809 10:05:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:32.809 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:32.809 10:05:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec 00:24:32.809 10:05:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:32.809 10:05:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:32.809 10:05:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:32.809 10:05:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec --dhchap-key key3 00:24:32.809 10:05:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:32.809 10:05:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:32.809 10:05:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:32.809 10:05:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@157 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:24:32.809 10:05:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:24:33.068 10:05:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@158 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:24:33.068 10:05:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:24:33.068 10:05:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:24:33.068 10:05:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:24:33.068 10:05:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:33.068 10:05:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:24:33.068 10:05:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:33.068 10:05:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:24:33.068 10:05:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:24:33.359 2024/07/15 10:05:46 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) dhchap_key:key3 hdgst:%!s(bool=false) hostnqn:nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:24:33.359 request: 00:24:33.359 { 00:24:33.359 "method": "bdev_nvme_attach_controller", 00:24:33.359 "params": { 00:24:33.359 "name": "nvme0", 00:24:33.359 "trtype": "tcp", 00:24:33.359 "traddr": "10.0.0.2", 00:24:33.359 "adrfam": "ipv4", 00:24:33.359 "trsvcid": "4420", 00:24:33.359 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:24:33.359 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec", 00:24:33.359 "prchk_reftag": false, 00:24:33.359 "prchk_guard": false, 00:24:33.359 "hdgst": false, 00:24:33.359 "ddgst": false, 00:24:33.359 "dhchap_key": "key3" 00:24:33.359 } 00:24:33.359 } 00:24:33.359 Got JSON-RPC error response 00:24:33.359 GoRPCClient: error on JSON-RPC call 00:24:33.359 10:05:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:24:33.359 10:05:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:24:33.359 10:05:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:24:33.359 10:05:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:24:33.359 10:05:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # IFS=, 00:24:33.359 10:05:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@164 -- # printf %s sha256,sha384,sha512 00:24:33.359 10:05:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 
00:24:33.359 10:05:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:24:33.618 10:05:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@169 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:24:33.618 10:05:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:24:33.618 10:05:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:24:33.618 10:05:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:24:33.618 10:05:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:33.618 10:05:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:24:33.618 10:05:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:33.618 10:05:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:24:33.619 10:05:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:24:33.619 2024/07/15 10:05:47 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) dhchap_key:key3 hdgst:%!s(bool=false) hostnqn:nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:24:33.619 request: 00:24:33.619 { 00:24:33.619 "method": "bdev_nvme_attach_controller", 00:24:33.619 "params": { 00:24:33.619 "name": "nvme0", 00:24:33.619 "trtype": "tcp", 00:24:33.619 "traddr": "10.0.0.2", 00:24:33.619 "adrfam": "ipv4", 00:24:33.619 "trsvcid": "4420", 00:24:33.619 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:24:33.619 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec", 00:24:33.619 "prchk_reftag": false, 00:24:33.619 "prchk_guard": false, 00:24:33.619 "hdgst": false, 00:24:33.619 "ddgst": false, 00:24:33.619 "dhchap_key": "key3" 00:24:33.619 } 00:24:33.619 } 00:24:33.619 Got JSON-RPC error response 00:24:33.619 GoRPCClient: error on JSON-RPC call 00:24:33.619 10:05:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:24:33.619 10:05:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:24:33.619 10:05:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:24:33.619 10:05:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 
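The failed attach attempts above (target/auth.sh@158 and @169) are deliberate negative checks: after the host's allowed DH-HMAC-CHAP digests and DH groups are narrowed via bdev_nvme_set_options, the attach with key3 is expected to fail (the trace shows Code=-5 Input/output error), and the NOT wrapper from autotest_common.sh counts that non-zero exit (es=1) as the passing outcome. A hedged sketch of the pattern, using a hypothetical expect_failure helper in place of the suite's NOT implementation:

  # Hypothetical helper standing in for autotest_common.sh's NOT: succeed only when
  # the wrapped command fails.
  expect_failure() {
      if "$@"; then
          echo "ERROR: expected failure, but command succeeded: $*" >&2
          return 1
      fi
      return 0
  }

  # The @169 case from the trace: with the host limited to dhgroup ffdhe2048, the
  # attach with key3 is expected to be rejected by the target.
  expect_failure /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock \
      bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
      -q nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec \
      -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3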
00:24:33.619 10:05:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:24:33.619 10:05:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s sha256,sha384,sha512 00:24:33.619 10:05:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:24:33.619 10:05:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:24:33.619 10:05:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:24:33.619 10:05:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:24:33.879 10:05:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@186 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec 00:24:33.879 10:05:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:33.879 10:05:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:33.879 10:05:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:33.879 10:05:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@187 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec 00:24:33.879 10:05:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:33.879 10:05:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:33.879 10:05:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:33.879 10:05:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@188 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:24:33.879 10:05:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:24:33.879 10:05:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:24:33.879 10:05:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:24:33.879 10:05:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:33.879 10:05:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:24:33.879 10:05:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:33.879 10:05:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:24:33.879 10:05:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:24:34.139 2024/07/15 10:05:47 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) dhchap_ctrlr_key:key1 dhchap_key:key0 hdgst:%!s(bool=false) hostnqn:nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:24:34.139 request: 00:24:34.139 { 00:24:34.139 "method": "bdev_nvme_attach_controller", 00:24:34.139 "params": { 00:24:34.139 "name": "nvme0", 00:24:34.139 "trtype": "tcp", 00:24:34.139 "traddr": "10.0.0.2", 00:24:34.139 "adrfam": "ipv4", 00:24:34.139 "trsvcid": "4420", 00:24:34.139 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:24:34.139 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec", 00:24:34.139 "prchk_reftag": false, 00:24:34.139 "prchk_guard": false, 00:24:34.139 "hdgst": false, 00:24:34.139 "ddgst": false, 00:24:34.139 "dhchap_key": "key0", 00:24:34.139 "dhchap_ctrlr_key": "key1" 00:24:34.139 } 00:24:34.139 } 00:24:34.139 Got JSON-RPC error response 00:24:34.139 GoRPCClient: error on JSON-RPC call 00:24:34.139 10:05:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:24:34.139 10:05:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:24:34.139 10:05:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:24:34.139 10:05:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:24:34.139 10:05:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@192 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:24:34.139 10:05:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:24:34.398 00:24:34.398 10:05:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # hostrpc bdev_nvme_get_controllers 00:24:34.398 10:05:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:34.398 10:05:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # jq -r '.[].name' 00:24:34.692 10:05:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:34.692 10:05:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@196 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:34.692 10:05:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:34.952 10:05:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@198 -- # trap - SIGINT SIGTERM EXIT 00:24:34.952 10:05:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@199 -- # cleanup 00:24:34.952 10:05:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 78148 00:24:34.952 10:05:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 
-- # '[' -z 78148 ']' 00:24:34.952 10:05:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 78148 00:24:34.952 10:05:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:24:34.952 10:05:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:34.952 10:05:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 78148 00:24:34.952 10:05:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:24:34.952 10:05:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:24:34.952 10:05:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 78148' 00:24:34.952 killing process with pid 78148 00:24:34.952 10:05:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 78148 00:24:34.952 10:05:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 78148 00:24:35.212 10:05:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:24:35.212 10:05:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:35.212 10:05:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@117 -- # sync 00:24:35.212 10:05:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:35.212 10:05:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@120 -- # set +e 00:24:35.212 10:05:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:35.212 10:05:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:35.212 rmmod nvme_tcp 00:24:35.212 rmmod nvme_fabrics 00:24:35.212 rmmod nvme_keyring 00:24:35.212 10:05:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:35.212 10:05:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@124 -- # set -e 00:24:35.212 10:05:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@125 -- # return 0 00:24:35.212 10:05:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@489 -- # '[' -n 82756 ']' 00:24:35.212 10:05:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@490 -- # killprocess 82756 00:24:35.212 10:05:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 82756 ']' 00:24:35.212 10:05:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 82756 00:24:35.212 10:05:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:24:35.212 10:05:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:35.212 10:05:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 82756 00:24:35.212 10:05:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:24:35.212 10:05:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:24:35.212 killing process with pid 82756 00:24:35.212 10:05:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 82756' 00:24:35.212 10:05:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 82756 00:24:35.212 10:05:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 82756 00:24:35.472 10:05:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:35.472 10:05:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:35.472 10:05:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@496 -- # 
nvmf_tcp_fini 00:24:35.472 10:05:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:35.472 10:05:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:35.472 10:05:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:35.472 10:05:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:35.472 10:05:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:35.472 10:05:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:24:35.472 10:05:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.iaQ /tmp/spdk.key-sha256.6sw /tmp/spdk.key-sha384.wIw /tmp/spdk.key-sha512.qtK /tmp/spdk.key-sha512.gGO /tmp/spdk.key-sha384.ewW /tmp/spdk.key-sha256.uer '' /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log /home/vagrant/spdk_repo/spdk/../output/nvmf-auth.log 00:24:35.472 00:24:35.472 real 2m26.344s 00:24:35.472 user 5m51.564s 00:24:35.472 sys 0m19.564s 00:24:35.472 10:05:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:35.472 10:05:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:35.472 ************************************ 00:24:35.732 END TEST nvmf_auth_target 00:24:35.732 ************************************ 00:24:35.732 10:05:49 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:24:35.732 10:05:49 nvmf_tcp -- nvmf/nvmf.sh@59 -- # '[' tcp = tcp ']' 00:24:35.732 10:05:49 nvmf_tcp -- nvmf/nvmf.sh@60 -- # run_test nvmf_bdevio_no_huge /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:24:35.732 10:05:49 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:24:35.732 10:05:49 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:35.732 10:05:49 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:35.732 ************************************ 00:24:35.732 START TEST nvmf_bdevio_no_huge 00:24:35.732 ************************************ 00:24:35.732 10:05:49 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:24:35.732 * Looking for test storage... 
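Annotation: the nvmf_auth_target run that ends above exercises NVMe/TCP DH-HMAC-CHAP between the SPDK target and the bdev_nvme host. The host-side RPC server on /var/tmp/host.sock is given the allowed digests and DH groups, the host NQN is re-registered on the subsystem, and a deliberately mismatched key pair (key0/key1) is attached first to confirm the connect fails with Code=-5 (Input/output error) before the matching single-key attach succeeds; the temporary /tmp/spdk.key-* files removed above are the shared secrets used for that exchange. A condensed sketch of the sequence, using only the RPCs visible in the log (the `|| echo` line stands in for the test's NOT wrapper, and `rpc.py` abbreviates /home/vagrant/spdk_repo/spdk/scripts/rpc.py):

# host side: allowed digests and DH groups for DH-HMAC-CHAP
rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
    --dhchap-digests sha256,sha384,sha512 \
    --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192

# target side: re-register the host NQN on the subsystem
rpc.py nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 \
    nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec
rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
    nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec

# expected-failure path: controller key does not match what the target expects
rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
    -a 10.0.0.2 -s 4420 \
    -q nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec \
    -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 \
    || echo "attach failed as expected"

# success path: single key, then verify the controller exists and detach it
rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
    -a 10.0.0.2 -s 4420 \
    -q nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec \
    -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0
rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0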
00:24:35.732 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:24:35.732 10:05:49 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:24:35.732 10:05:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:24:35.732 10:05:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:35.732 10:05:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:35.732 10:05:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:35.732 10:05:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:35.732 10:05:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:35.732 10:05:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:35.732 10:05:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:35.732 10:05:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:35.732 10:05:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:35.732 10:05:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:35.732 10:05:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec 00:24:35.732 10:05:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=a2b6b25a-cc90-4aea-9f09-c06f8a634aec 00:24:35.732 10:05:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:35.732 10:05:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:35.732 10:05:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:24:35.732 10:05:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:35.732 10:05:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:35.732 10:05:49 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:35.732 10:05:49 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:35.732 10:05:49 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:35.732 10:05:49 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:35.732 10:05:49 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:35.732 10:05:49 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:35.732 10:05:49 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:24:35.732 10:05:49 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:35.732 10:05:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@47 -- # : 0 00:24:35.732 10:05:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:35.732 10:05:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:35.732 10:05:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:35.732 10:05:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:35.732 10:05:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:35.732 10:05:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:35.732 10:05:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:35.732 10:05:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:35.732 10:05:49 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:35.732 10:05:49 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:35.732 10:05:49 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:24:35.732 10:05:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:35.732 10:05:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:35.732 10:05:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:35.732 10:05:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:35.732 10:05:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:35.732 10:05:49 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:35.732 10:05:49 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:35.732 10:05:49 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:35.732 10:05:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:24:35.732 10:05:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:24:35.732 10:05:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:24:35.732 10:05:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:24:35.732 10:05:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:24:35.732 10:05:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@432 -- # nvmf_veth_init 00:24:35.732 10:05:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:35.732 10:05:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:35.732 10:05:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:24:35.732 10:05:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:24:35.732 10:05:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:24:35.732 10:05:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:24:35.732 10:05:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:24:35.732 10:05:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:35.732 10:05:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:24:35.732 10:05:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:24:35.732 10:05:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:24:35.732 10:05:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:24:35.732 10:05:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:24:35.732 10:05:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:24:35.993 Cannot find device "nvmf_tgt_br" 00:24:35.993 10:05:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@155 -- # true 00:24:35.993 10:05:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:24:35.993 Cannot find device "nvmf_tgt_br2" 00:24:35.993 10:05:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@156 -- # true 00:24:35.993 10:05:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:24:35.993 10:05:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:24:35.993 Cannot find device "nvmf_tgt_br" 00:24:35.993 10:05:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@158 -- # true 00:24:35.993 10:05:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:24:35.993 Cannot find device "nvmf_tgt_br2" 00:24:35.993 10:05:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@159 -- # true 00:24:35.993 10:05:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 
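Annotation: with NET_TYPE=virt, nvmftestinit builds a disposable virtual topology instead of touching physical NICs. The initiator side stays in the default namespace on nvmf_init_if (10.0.0.1), the target interfaces nvmf_tgt_if (10.0.0.2) and nvmf_tgt_if2 (10.0.0.3) are moved into the nvmf_tgt_ns_spdk namespace, and the peer ends of all three veth pairs are enslaved to the nvmf_br bridge. The "Cannot find device" / "No such file or directory" messages above are just the best-effort teardown of a previous run. A condensed sketch of the setup that follows in the log, using the same names and addresses:

ip netns add nvmf_tgt_ns_spdk

# veth pairs: one end carries traffic, the peer end goes onto the bridge
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2

# target-side interfaces live inside the namespace
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" up; done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

# bridge ties the three peer ends together; allow NVMe/TCP (4420) toward the initiator
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT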
00:24:35.993 10:05:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:24:35.993 10:05:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:35.993 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:35.993 10:05:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # true 00:24:35.993 10:05:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:35.993 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:35.993 10:05:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # true 00:24:35.993 10:05:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:24:35.993 10:05:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:24:35.993 10:05:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:24:35.993 10:05:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:24:35.993 10:05:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:24:35.993 10:05:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:24:35.993 10:05:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:24:35.993 10:05:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:24:35.993 10:05:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:24:35.993 10:05:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:24:35.993 10:05:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:24:35.993 10:05:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:24:35.993 10:05:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:24:35.993 10:05:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:24:35.993 10:05:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:24:35.993 10:05:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:24:35.993 10:05:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:24:35.993 10:05:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:24:36.253 10:05:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:24:36.253 10:05:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:24:36.253 10:05:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:24:36.253 10:05:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:24:36.253 10:05:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o 
nvmf_br -j ACCEPT 00:24:36.253 10:05:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:24:36.253 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:36.253 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.136 ms 00:24:36.253 00:24:36.253 --- 10.0.0.2 ping statistics --- 00:24:36.253 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:36.253 rtt min/avg/max/mdev = 0.136/0.136/0.136/0.000 ms 00:24:36.253 10:05:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:24:36.253 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:24:36.253 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.081 ms 00:24:36.253 00:24:36.253 --- 10.0.0.3 ping statistics --- 00:24:36.253 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:36.253 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms 00:24:36.253 10:05:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:24:36.253 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:36.253 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.034 ms 00:24:36.253 00:24:36.253 --- 10.0.0.1 ping statistics --- 00:24:36.253 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:36.253 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:24:36.253 10:05:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:36.253 10:05:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@433 -- # return 0 00:24:36.253 10:05:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:36.253 10:05:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:36.253 10:05:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:36.253 10:05:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:36.253 10:05:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:36.253 10:05:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:36.253 10:05:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:36.253 10:05:49 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:24:36.253 10:05:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:36.253 10:05:49 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:36.253 10:05:49 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:24:36.253 10:05:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # nvmfpid=83153 00:24:36.253 10:05:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:24:36.253 10:05:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # waitforlisten 83153 00:24:36.253 10:05:49 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@829 -- # '[' -z 83153 ']' 00:24:36.253 10:05:49 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:36.253 10:05:49 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:36.253 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
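Annotation: the three single-packet pings above verify the bridge path in both directions (initiator to 10.0.0.2/10.0.0.3, and from inside the namespace back to 10.0.0.1) before anything NVMe-related starts. nvme-tcp is then loaded on the initiator side, and because this is the no-huge variant, the target is launched inside the namespace without hugepages: --no-huge -s 1024 gives it 1 GiB of ordinary memory, and -m 0x78 restricts it to cores 3-6, which matches the reactor start-up notices that follow. waitforlisten then blocks until /var/tmp/spdk.sock answers. A minimal sketch of that start-up, assuming a simple rpc_get_methods poll as a stand-in for the real waitforlisten helper in autotest_common.sh:

modprobe nvme-tcp

ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt \
    -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 &
nvmfpid=$!

# stand-in for waitforlisten: poll the RPC socket until the app responds
# (the polling loop itself is an assumption, not taken from this log)
until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
done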
00:24:36.253 10:05:49 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:36.253 10:05:49 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:36.253 10:05:49 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:24:36.253 [2024-07-15 10:05:49.715790] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:24:36.253 [2024-07-15 10:05:49.715867] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:24:36.513 [2024-07-15 10:05:49.849723] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:36.513 [2024-07-15 10:05:49.954247] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:36.513 [2024-07-15 10:05:49.954318] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:36.513 [2024-07-15 10:05:49.954324] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:36.513 [2024-07-15 10:05:49.954328] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:36.513 [2024-07-15 10:05:49.954332] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:36.513 [2024-07-15 10:05:49.954542] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:24:36.513 [2024-07-15 10:05:49.954736] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:24:36.513 [2024-07-15 10:05:49.954928] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:24:36.513 [2024-07-15 10:05:49.954933] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:24:37.090 10:05:50 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:37.090 10:05:50 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@862 -- # return 0 00:24:37.090 10:05:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:37.090 10:05:50 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:37.090 10:05:50 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:24:37.090 10:05:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:37.090 10:05:50 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:37.090 10:05:50 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:37.090 10:05:50 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:24:37.090 [2024-07-15 10:05:50.634977] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:37.090 10:05:50 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:37.090 10:05:50 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:24:37.091 10:05:50 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:37.091 10:05:50 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:24:37.091 Malloc0 00:24:37.091 
10:05:50 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:37.091 10:05:50 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:37.091 10:05:50 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:37.091 10:05:50 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:24:37.091 10:05:50 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:37.091 10:05:50 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:37.091 10:05:50 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:37.091 10:05:50 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:24:37.350 10:05:50 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:37.350 10:05:50 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:37.350 10:05:50 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:37.350 10:05:50 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:24:37.350 [2024-07-15 10:05:50.687617] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:37.350 10:05:50 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:37.350 10:05:50 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:24:37.350 10:05:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # config=() 00:24:37.350 10:05:50 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:24:37.350 10:05:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # local subsystem config 00:24:37.350 10:05:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:37.350 10:05:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:37.350 { 00:24:37.350 "params": { 00:24:37.350 "name": "Nvme$subsystem", 00:24:37.350 "trtype": "$TEST_TRANSPORT", 00:24:37.350 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:37.350 "adrfam": "ipv4", 00:24:37.350 "trsvcid": "$NVMF_PORT", 00:24:37.350 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:37.350 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:37.350 "hdgst": ${hdgst:-false}, 00:24:37.350 "ddgst": ${ddgst:-false} 00:24:37.350 }, 00:24:37.350 "method": "bdev_nvme_attach_controller" 00:24:37.350 } 00:24:37.350 EOF 00:24:37.350 )") 00:24:37.350 10:05:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # cat 00:24:37.350 10:05:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # jq . 
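Annotation: once the target answers RPC, bdevio.sh provisions it with a TCP transport, a 64 MiB / 512-byte-block malloc bdev, and a subsystem that exports that bdev as a namespace on the listener 10.0.0.2:4420. In the test these calls go through the rpc_cmd wrapper against /var/tmp/spdk.sock; they are written out below as direct rpc.py calls for readability (`rpc.py` abbreviates /home/vagrant/spdk_repo/spdk/scripts/rpc.py):

rpc.py nvmf_create_transport -t tcp -o -u 8192
rpc.py bdev_malloc_create 64 512 -b Malloc0
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The gen_nvmf_target_json heredoc above ("Nvme$subsystem", "$TEST_TRANSPORT", "$NVMF_FIRST_TARGET_IP", ...) is then rendered through jq into the concrete single-controller JSON printed next, which the bdevio binary consumes via --json /dev/fd/62 with the same --no-huge -s 1024 memory settings as the target, so both ends of the test run without hugepages.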
00:24:37.350 10:05:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@557 -- # IFS=, 00:24:37.350 10:05:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:24:37.350 "params": { 00:24:37.350 "name": "Nvme1", 00:24:37.350 "trtype": "tcp", 00:24:37.350 "traddr": "10.0.0.2", 00:24:37.350 "adrfam": "ipv4", 00:24:37.350 "trsvcid": "4420", 00:24:37.350 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:37.350 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:37.350 "hdgst": false, 00:24:37.350 "ddgst": false 00:24:37.350 }, 00:24:37.350 "method": "bdev_nvme_attach_controller" 00:24:37.350 }' 00:24:37.350 [2024-07-15 10:05:50.744750] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:24:37.350 [2024-07-15 10:05:50.744823] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid83206 ] 00:24:37.350 [2024-07-15 10:05:50.874644] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:24:37.608 [2024-07-15 10:05:50.986212] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:37.608 [2024-07-15 10:05:50.986404] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:37.608 [2024-07-15 10:05:50.986422] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:37.608 I/O targets: 00:24:37.608 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:24:37.608 00:24:37.608 00:24:37.608 CUnit - A unit testing framework for C - Version 2.1-3 00:24:37.608 http://cunit.sourceforge.net/ 00:24:37.608 00:24:37.608 00:24:37.608 Suite: bdevio tests on: Nvme1n1 00:24:37.867 Test: blockdev write read block ...passed 00:24:37.867 Test: blockdev write zeroes read block ...passed 00:24:37.867 Test: blockdev write zeroes read no split ...passed 00:24:37.867 Test: blockdev write zeroes read split ...passed 00:24:37.867 Test: blockdev write zeroes read split partial ...passed 00:24:37.867 Test: blockdev reset ...[2024-07-15 10:05:51.291151] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:37.867 [2024-07-15 10:05:51.291257] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f3460 (9): Bad file descriptor 00:24:37.867 passed 00:24:37.867 Test: blockdev write read 8 blocks ...[2024-07-15 10:05:51.304413] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:24:37.867 passed 00:24:37.867 Test: blockdev write read size > 128k ...passed 00:24:37.867 Test: blockdev write read invalid size ...passed 00:24:37.867 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:24:37.867 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:24:37.867 Test: blockdev write read max offset ...passed 00:24:37.867 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:24:37.867 Test: blockdev writev readv 8 blocks ...passed 00:24:37.867 Test: blockdev writev readv 30 x 1block ...passed 00:24:38.126 Test: blockdev writev readv block ...passed 00:24:38.126 Test: blockdev writev readv size > 128k ...passed 00:24:38.126 Test: blockdev writev readv size > 128k in two iovs ...passed 00:24:38.126 Test: blockdev comparev and writev ...[2024-07-15 10:05:51.476594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:24:38.126 [2024-07-15 10:05:51.476726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:38.126 [2024-07-15 10:05:51.476785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:24:38.126 [2024-07-15 10:05:51.476839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:38.126 [2024-07-15 10:05:51.477150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:24:38.126 [2024-07-15 10:05:51.477205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:38.126 [2024-07-15 10:05:51.477258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:24:38.126 [2024-07-15 10:05:51.477305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:38.126 [2024-07-15 10:05:51.477598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:24:38.126 [2024-07-15 10:05:51.477649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:38.126 [2024-07-15 10:05:51.477738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:24:38.126 [2024-07-15 10:05:51.477787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:38.126 [2024-07-15 10:05:51.478089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:24:38.126 [2024-07-15 10:05:51.478138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:38.126 [2024-07-15 10:05:51.478195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:24:38.126 [2024-07-15 10:05:51.478250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 
00:24:38.126 passed 00:24:38.126 Test: blockdev nvme passthru rw ...passed 00:24:38.126 Test: blockdev nvme passthru vendor specific ...[2024-07-15 10:05:51.560096] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:24:38.126 [2024-07-15 10:05:51.560209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:38.126 [2024-07-15 10:05:51.560395] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:24:38.126 [2024-07-15 10:05:51.560458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:38.126 [2024-07-15 10:05:51.560614] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:24:38.126 [2024-07-15 10:05:51.560669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:38.126 [2024-07-15 10:05:51.560814] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:24:38.126 [2024-07-15 10:05:51.560860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:38.126 passed 00:24:38.126 Test: blockdev nvme admin passthru ...passed 00:24:38.126 Test: blockdev copy ...passed 00:24:38.126 00:24:38.126 Run Summary: Type Total Ran Passed Failed Inactive 00:24:38.126 suites 1 1 n/a 0 0 00:24:38.126 tests 23 23 23 0 0 00:24:38.126 asserts 152 152 152 0 n/a 00:24:38.126 00:24:38.126 Elapsed time = 0.940 seconds 00:24:38.384 10:05:51 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:38.384 10:05:51 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:38.384 10:05:51 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:24:38.384 10:05:51 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:38.384 10:05:51 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:24:38.384 10:05:51 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:24:38.384 10:05:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:38.384 10:05:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@117 -- # sync 00:24:38.642 10:05:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:38.642 10:05:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@120 -- # set +e 00:24:38.642 10:05:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:38.642 10:05:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:38.642 rmmod nvme_tcp 00:24:38.642 rmmod nvme_fabrics 00:24:38.642 rmmod nvme_keyring 00:24:38.642 10:05:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:38.643 10:05:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set -e 00:24:38.643 10:05:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # return 0 00:24:38.643 10:05:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # '[' -n 83153 ']' 00:24:38.643 10:05:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@490 -- # killprocess 83153 00:24:38.643 
10:05:52 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@948 -- # '[' -z 83153 ']' 00:24:38.643 10:05:52 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # kill -0 83153 00:24:38.643 10:05:52 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # uname 00:24:38.643 10:05:52 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:38.643 10:05:52 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 83153 00:24:38.643 killing process with pid 83153 00:24:38.643 10:05:52 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:24:38.643 10:05:52 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:24:38.643 10:05:52 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@966 -- # echo 'killing process with pid 83153' 00:24:38.643 10:05:52 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@967 -- # kill 83153 00:24:38.643 10:05:52 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # wait 83153 00:24:38.902 10:05:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:38.902 10:05:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:38.902 10:05:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:38.902 10:05:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:38.902 10:05:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:38.902 10:05:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:38.902 10:05:52 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:38.902 10:05:52 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:38.902 10:05:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:24:38.902 00:24:38.902 real 0m3.347s 00:24:38.902 user 0m11.724s 00:24:38.902 sys 0m1.208s 00:24:38.902 ************************************ 00:24:38.902 END TEST nvmf_bdevio_no_huge 00:24:38.902 ************************************ 00:24:38.902 10:05:52 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:38.902 10:05:52 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:24:39.161 10:05:52 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:24:39.161 10:05:52 nvmf_tcp -- nvmf/nvmf.sh@61 -- # run_test nvmf_tls /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:24:39.161 10:05:52 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:24:39.161 10:05:52 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:39.161 10:05:52 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:39.161 ************************************ 00:24:39.161 START TEST nvmf_tls 00:24:39.161 ************************************ 00:24:39.161 10:05:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:24:39.161 * Looking for test storage... 
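Annotation: the teardown above follows the same pattern after every target test. nvmftestfini syncs, unloads the kernel modules (which is why the bare "rmmod nvme_tcp / nvme_fabrics / nvme_keyring" lines appear without a visible rmmod call), and killprocess stops the target app it recorded at start-up, first resolving the process's comm name and refusing to kill anything that resolves to "sudo". A rough sketch of the pattern, based on the calls visible in the log (the real helpers live in common/autotest_common.sh and nvmf/common.sh; the break condition and namespace deletion are assumptions):

pid=83153                                    # nvmfpid recorded when nvmf_tgt was launched
if [ "$(ps --no-headers -o comm= "$pid")" != sudo ]; then
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"
fi

sync
set +e
for i in {1..20}; do
    # modprobe -r nvme-tcp also pulls out nvme_fabrics / nvme_keyring dependencies
    modprobe -v -r nvme-tcp && modprobe -v -r nvme-fabrics && break
done
set -e

ip netns delete nvmf_tgt_ns_spdk             # remove_spdk_ns equivalent (illustrative)
ip -4 addr flush nvmf_init_if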
00:24:39.161 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:24:39.161 10:05:52 nvmf_tcp.nvmf_tls -- target/tls.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:24:39.161 10:05:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:24:39.161 10:05:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:39.161 10:05:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:39.161 10:05:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:39.161 10:05:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:39.161 10:05:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:39.161 10:05:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:39.161 10:05:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:39.161 10:05:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:39.161 10:05:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:39.161 10:05:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:39.161 10:05:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec 00:24:39.161 10:05:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=a2b6b25a-cc90-4aea-9f09-c06f8a634aec 00:24:39.161 10:05:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:39.161 10:05:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:39.161 10:05:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:24:39.161 10:05:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:39.161 10:05:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:39.161 10:05:52 nvmf_tcp.nvmf_tls -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:39.161 10:05:52 nvmf_tcp.nvmf_tls -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:39.161 10:05:52 nvmf_tcp.nvmf_tls -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:39.161 10:05:52 nvmf_tcp.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:39.161 10:05:52 nvmf_tcp.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:39.161 10:05:52 nvmf_tcp.nvmf_tls -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:39.161 10:05:52 nvmf_tcp.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:24:39.161 10:05:52 nvmf_tcp.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:39.161 10:05:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@47 -- # : 0 00:24:39.161 10:05:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:39.161 10:05:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:39.161 10:05:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:39.161 10:05:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:39.161 10:05:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:39.161 10:05:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:39.161 10:05:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:39.161 10:05:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:39.161 10:05:52 nvmf_tcp.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:39.161 10:05:52 nvmf_tcp.nvmf_tls -- target/tls.sh@62 -- # nvmftestinit 00:24:39.161 10:05:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:39.161 10:05:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:39.161 10:05:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:39.161 10:05:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:39.161 10:05:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:39.161 10:05:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:39.161 10:05:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:39.161 10:05:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:39.161 10:05:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:24:39.161 10:05:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:24:39.161 10:05:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:24:39.161 10:05:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:24:39.161 10:05:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:24:39.161 10:05:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@432 -- # nvmf_veth_init 00:24:39.161 10:05:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@141 -- # 
NVMF_INITIATOR_IP=10.0.0.1 00:24:39.161 10:05:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:39.161 10:05:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:24:39.161 10:05:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:24:39.161 10:05:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:24:39.161 10:05:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:24:39.161 10:05:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:24:39.161 10:05:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:39.161 10:05:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:24:39.161 10:05:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:24:39.161 10:05:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:24:39.161 10:05:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:24:39.161 10:05:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:24:39.161 10:05:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:24:39.421 Cannot find device "nvmf_tgt_br" 00:24:39.421 10:05:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@155 -- # true 00:24:39.421 10:05:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:24:39.421 Cannot find device "nvmf_tgt_br2" 00:24:39.421 10:05:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@156 -- # true 00:24:39.421 10:05:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:24:39.421 10:05:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:24:39.421 Cannot find device "nvmf_tgt_br" 00:24:39.421 10:05:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@158 -- # true 00:24:39.421 10:05:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:24:39.421 Cannot find device "nvmf_tgt_br2" 00:24:39.421 10:05:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@159 -- # true 00:24:39.421 10:05:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:24:39.421 10:05:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:24:39.421 10:05:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:39.421 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:39.421 10:05:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@162 -- # true 00:24:39.421 10:05:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:39.421 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:39.421 10:05:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@163 -- # true 00:24:39.421 10:05:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:24:39.421 10:05:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:24:39.421 10:05:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:24:39.421 10:05:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:24:39.421 10:05:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns 
nvmf_tgt_ns_spdk 00:24:39.421 10:05:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:24:39.421 10:05:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:24:39.421 10:05:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:24:39.421 10:05:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:24:39.421 10:05:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:24:39.421 10:05:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:24:39.421 10:05:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:24:39.421 10:05:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:24:39.421 10:05:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:24:39.421 10:05:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:24:39.421 10:05:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:24:39.421 10:05:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:24:39.681 10:05:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:24:39.681 10:05:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:24:39.681 10:05:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:24:39.681 10:05:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:24:39.681 10:05:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:24:39.681 10:05:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:24:39.681 10:05:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:24:39.681 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:39.681 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.164 ms 00:24:39.681 00:24:39.681 --- 10.0.0.2 ping statistics --- 00:24:39.681 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:39.681 rtt min/avg/max/mdev = 0.164/0.164/0.164/0.000 ms 00:24:39.681 10:05:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:24:39.681 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:24:39.681 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.065 ms 00:24:39.681 00:24:39.681 --- 10.0.0.3 ping statistics --- 00:24:39.681 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:39.681 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:24:39.681 10:05:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:24:39.681 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:39.681 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.096 ms 00:24:39.681 00:24:39.681 --- 10.0.0.1 ping statistics --- 00:24:39.681 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:39.681 rtt min/avg/max/mdev = 0.096/0.096/0.096/0.000 ms 00:24:39.681 10:05:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:39.681 10:05:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@433 -- # return 0 00:24:39.681 10:05:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:39.681 10:05:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:39.681 10:05:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:39.681 10:05:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:39.681 10:05:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:39.681 10:05:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:39.681 10:05:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:39.681 10:05:53 nvmf_tcp.nvmf_tls -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:24:39.681 10:05:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:39.681 10:05:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:39.681 10:05:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:39.681 10:05:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=83388 00:24:39.681 10:05:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:24:39.681 10:05:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 83388 00:24:39.681 10:05:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 83388 ']' 00:24:39.681 10:05:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:39.681 10:05:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:39.681 10:05:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:39.681 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:39.681 10:05:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:39.681 10:05:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:39.681 [2024-07-15 10:05:53.163822] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:24:39.681 [2024-07-15 10:05:53.163894] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:39.939 [2024-07-15 10:05:53.305968] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:39.939 [2024-07-15 10:05:53.408442] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:39.939 [2024-07-15 10:05:53.408490] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:24:39.939 [2024-07-15 10:05:53.408497] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:39.939 [2024-07-15 10:05:53.408501] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:39.939 [2024-07-15 10:05:53.408505] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:39.939 [2024-07-15 10:05:53.408525] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:40.506 10:05:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:40.506 10:05:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:24:40.506 10:05:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:40.506 10:05:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:40.506 10:05:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:40.506 10:05:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:40.506 10:05:54 nvmf_tcp.nvmf_tls -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:24:40.506 10:05:54 nvmf_tcp.nvmf_tls -- target/tls.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:24:40.764 true 00:24:40.764 10:05:54 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:24:40.764 10:05:54 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # jq -r .tls_version 00:24:41.035 10:05:54 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # version=0 00:24:41.035 10:05:54 nvmf_tcp.nvmf_tls -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:24:41.035 10:05:54 nvmf_tcp.nvmf_tls -- target/tls.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:24:41.318 10:05:54 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:24:41.318 10:05:54 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # jq -r .tls_version 00:24:41.577 10:05:54 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # version=13 00:24:41.577 10:05:54 nvmf_tcp.nvmf_tls -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:24:41.577 10:05:54 nvmf_tcp.nvmf_tls -- target/tls.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:24:41.577 10:05:55 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # jq -r .tls_version 00:24:41.577 10:05:55 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:24:41.870 10:05:55 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # version=7 00:24:41.870 10:05:55 nvmf_tcp.nvmf_tls -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:24:41.870 10:05:55 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:24:41.870 10:05:55 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # jq -r .enable_ktls 00:24:42.128 10:05:55 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # ktls=false 00:24:42.128 10:05:55 nvmf_tcp.nvmf_tls -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:24:42.128 10:05:55 nvmf_tcp.nvmf_tls -- target/tls.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:24:42.386 10:05:55 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # jq -r .enable_ktls 00:24:42.386 10:05:55 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 
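Everything above, once the target process is up, is plain JSON-RPC against a paused app: the test selects the TLS-capable "ssl" socket implementation and adjusts its options while nvmf_tgt is still held at --wait-for-rpc, so the settings land before framework_start_init. A condensed sketch of that sequence, using the same rpc.py path and flags as the trace (the $rpc shorthand is only for readability here):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
# make the TLS-capable implementation the default for new sockets
$rpc sock_set_default_impl -i ssl
# pin the TLS protocol version and read it back
$rpc sock_impl_set_options -i ssl --tls-version 13
$rpc sock_impl_get_options -i ssl | jq -r .tls_version    # expect 13
# kernel TLS offload is toggled the same way
$rpc sock_impl_set_options -i ssl --enable-ktls
$rpc sock_impl_get_options -i ssl | jq -r .enable_ktls    # expect true
$rpc sock_impl_set_options -i ssl --disable-ktls

The get_options/jq round-trips are how the script asserts each option actually took effect before it calls framework_start_init and moves on.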
00:24:42.645 10:05:55 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # ktls=true 00:24:42.645 10:05:55 nvmf_tcp.nvmf_tls -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:24:42.645 10:05:55 nvmf_tcp.nvmf_tls -- target/tls.sh@111 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:24:42.645 10:05:56 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # jq -r .enable_ktls 00:24:42.645 10:05:56 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:24:42.904 10:05:56 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # ktls=false 00:24:42.904 10:05:56 nvmf_tcp.nvmf_tls -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:24:42.904 10:05:56 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:24:42.904 10:05:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:24:42.904 10:05:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:24:42.904 10:05:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:24:42.904 10:05:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:24:42.904 10:05:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:24:42.904 10:05:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:24:42.904 10:05:56 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:24:42.904 10:05:56 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:24:42.904 10:05:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:24:42.904 10:05:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:24:42.904 10:05:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:24:42.904 10:05:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=ffeeddccbbaa99887766554433221100 00:24:42.904 10:05:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:24:42.904 10:05:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:24:42.904 10:05:56 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:24:43.163 10:05:56 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # mktemp 00:24:43.163 10:05:56 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # key_path=/tmp/tmp.xrzdhqAweF 00:24:43.163 10:05:56 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:24:43.163 10:05:56 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.TpEooliSSY 00:24:43.163 10:05:56 nvmf_tcp.nvmf_tls -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:24:43.163 10:05:56 nvmf_tcp.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:24:43.163 10:05:56 nvmf_tcp.nvmf_tls -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.xrzdhqAweF 00:24:43.163 10:05:56 nvmf_tcp.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.TpEooliSSY 00:24:43.163 10:05:56 nvmf_tcp.nvmf_tls -- target/tls.sh@130 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:24:43.163 10:05:56 nvmf_tcp.nvmf_tls -- target/tls.sh@131 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:24:43.423 10:05:56 nvmf_tcp.nvmf_tls -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.xrzdhqAweF 
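format_interchange_psk, traced above via nvmf/common.sh, turns the raw 32-hex-digit PSK configured by the test into the NVMeTLSkey-1:01:<base64>: interchange form, and the result is written to a mktemp'd file with mode 0600. A rough standalone sketch of what that helper appears to do; the CRC32 handling is an assumption based on the TLS PSK interchange format and is not shown verbatim in the trace:

key=00112233445566778899aabbccddeeff   # the first hex PSK used by the test
psk=$(python3 -c '
import base64, sys, zlib
configured = sys.argv[1].encode()
# assumption: a little-endian CRC32 of the configured key is appended
# before base64-encoding, as in the TLS PSK interchange format
crc = zlib.crc32(configured).to_bytes(4, "little")
print("NVMeTLSkey-1:01:" + base64.b64encode(configured + crc).decode() + ":")
' "$key")
key_path=$(mktemp)
echo -n "$psk" > "$key_path"
chmod 0600 "$key_path"

The file path, not the key text, is what the later --psk and --psk-path arguments consume.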
00:24:43.423 10:05:56 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.xrzdhqAweF 00:24:43.423 10:05:56 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:24:43.682 [2024-07-15 10:05:57.147466] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:43.682 10:05:57 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:24:43.941 10:05:57 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:24:44.201 [2024-07-15 10:05:57.526767] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:44.201 [2024-07-15 10:05:57.526953] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:44.201 10:05:57 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:24:44.201 malloc0 00:24:44.201 10:05:57 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:24:44.460 10:05:57 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.xrzdhqAweF 00:24:44.720 [2024-07-15 10:05:58.110367] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:24:44.720 10:05:58 nvmf_tcp.nvmf_tls -- target/tls.sh@137 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.xrzdhqAweF 00:24:56.933 Initializing NVMe Controllers 00:24:56.933 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:56.933 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:56.933 Initialization complete. Launching workers. 
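setup_nvmf_tgt, whose trace starts here, boils down to a handful of RPCs: create the TCP transport, a subsystem backed by a malloc namespace, a listener with -k so TLS is required on that portal, and an add_host call that binds the interchange-format key to host1 (the deprecated PSK-path flavour, as the warning above notes). The perf results for the TLS-secured connection follow further down. Condensed, with the same arguments as the trace (the $rpc shorthand is only for readability here):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
# -k marks the listener as TLS-enabled
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
$rpc bdev_malloc_create 32 4096 -b malloc0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
# authorize host1 and bind the interchange-format PSK file to it
$rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.xrzdhqAweF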
00:24:56.933 ======================================================== 00:24:56.933 Latency(us) 00:24:56.933 Device Information : IOPS MiB/s Average min max 00:24:56.933 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 15641.39 61.10 4092.18 1014.85 5237.05 00:24:56.933 ======================================================== 00:24:56.933 Total : 15641.39 61.10 4092.18 1014.85 5237.05 00:24:56.933 00:24:56.933 10:06:08 nvmf_tcp.nvmf_tls -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.xrzdhqAweF 00:24:56.933 10:06:08 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:24:56.933 10:06:08 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:24:56.933 10:06:08 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:24:56.933 10:06:08 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.xrzdhqAweF' 00:24:56.933 10:06:08 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:56.933 10:06:08 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:56.933 10:06:08 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=83735 00:24:56.933 10:06:08 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:56.933 10:06:08 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 83735 /var/tmp/bdevperf.sock 00:24:56.933 10:06:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 83735 ']' 00:24:56.933 10:06:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:56.933 10:06:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:56.933 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:56.933 10:06:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:56.933 10:06:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:56.933 10:06:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:56.933 [2024-07-15 10:06:08.347131] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
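From here on every positive and negative case follows the same bdevperf pattern, traced in the lines that follow: start bdevperf idle on its own RPC socket, attach an NVMe-oF controller to the TLS listener with some --psk, then drive the configured verify workload through bdevperf.py. A condensed sketch with the arguments used in this run (shorthand variables are only for readability; the script waits for the socket with waitforlisten rather than relying on timing):

bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/bdevperf.sock
# start bdevperf idle (-z) on its own RPC socket
$bdevperf -m 0x4 -z -r $sock -q 128 -o 4096 -w verify -t 10 &
# once it is listening, attach a TLS-protected controller ...
$rpc -s $sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.xrzdhqAweF
# ... and run the configured workload against it
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s $sock perform_tests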
00:24:56.933 [2024-07-15 10:06:08.347239] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83735 ] 00:24:56.933 [2024-07-15 10:06:08.473337] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:56.933 [2024-07-15 10:06:08.579340] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:56.933 10:06:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:56.933 10:06:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:24:56.933 10:06:09 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.xrzdhqAweF 00:24:56.933 [2024-07-15 10:06:09.434612] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:56.933 [2024-07-15 10:06:09.434721] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:24:56.933 TLSTESTn1 00:24:56.933 10:06:09 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:24:56.933 Running I/O for 10 seconds... 00:25:06.917 00:25:06.917 Latency(us) 00:25:06.918 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:06.918 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:25:06.918 Verification LBA range: start 0x0 length 0x2000 00:25:06.918 TLSTESTn1 : 10.01 6147.13 24.01 0.00 0.00 20787.94 4550.32 15568.38 00:25:06.918 =================================================================================================================== 00:25:06.918 Total : 6147.13 24.01 0.00 0.00 20787.94 4550.32 15568.38 00:25:06.918 0 00:25:06.918 10:06:19 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:06.918 10:06:19 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 83735 00:25:06.918 10:06:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 83735 ']' 00:25:06.918 10:06:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 83735 00:25:06.918 10:06:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:25:06.918 10:06:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:06.918 10:06:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 83735 00:25:06.918 10:06:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:25:06.918 10:06:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:25:06.918 killing process with pid 83735 00:25:06.918 10:06:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 83735' 00:25:06.918 10:06:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 83735 00:25:06.918 Received shutdown signal, test time was about 10.000000 seconds 00:25:06.918 00:25:06.918 Latency(us) 00:25:06.918 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:06.918 
=================================================================================================================== 00:25:06.918 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:06.918 [2024-07-15 10:06:19.686187] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:25:06.918 10:06:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 83735 00:25:06.918 10:06:19 nvmf_tcp.nvmf_tls -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.TpEooliSSY 00:25:06.918 10:06:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:25:06.918 10:06:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.TpEooliSSY 00:25:06.918 10:06:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:25:06.918 10:06:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:06.918 10:06:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:25:06.918 10:06:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:06.918 10:06:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.TpEooliSSY 00:25:06.918 10:06:19 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:25:06.918 10:06:19 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:25:06.918 10:06:19 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:25:06.918 10:06:19 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.TpEooliSSY' 00:25:06.918 10:06:19 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:06.918 10:06:19 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=83887 00:25:06.918 10:06:19 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:25:06.918 10:06:19 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:25:06.918 10:06:19 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 83887 /var/tmp/bdevperf.sock 00:25:06.918 10:06:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 83887 ']' 00:25:06.918 10:06:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:06.918 10:06:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:06.918 10:06:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:06.918 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:06.918 10:06:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:06.918 10:06:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:06.918 [2024-07-15 10:06:19.936460] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
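This second bdevperf instance (pid 83887) is handed /tmp/tmp.TpEooliSSY, the key that was never registered for host1 on the target, so the bdev_nvme_attach_controller traced below is expected to fail; the case only passes because the NOT wrapper from autotest_common.sh inverts the result. A simplified reading of that wrapper, ignoring the signal-exit (es > 128) special case it also checks:

# succeeds only when the wrapped command fails
NOT() { if "$@"; then return 1; else return 0; fi; }

NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.TpEooliSSY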
00:25:06.918 [2024-07-15 10:06:19.937030] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83887 ] 00:25:06.918 [2024-07-15 10:06:20.073069] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:06.918 [2024-07-15 10:06:20.179528] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:25:07.485 10:06:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:07.485 10:06:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:25:07.485 10:06:20 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.TpEooliSSY 00:25:07.485 [2024-07-15 10:06:20.971130] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:07.485 [2024-07-15 10:06:20.971317] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:25:07.485 [2024-07-15 10:06:20.976055] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:25:07.485 [2024-07-15 10:06:20.976696] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcfcca0 (107): Transport endpoint is not connected 00:25:07.485 [2024-07-15 10:06:20.977683] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcfcca0 (9): Bad file descriptor 00:25:07.485 [2024-07-15 10:06:20.978680] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:07.485 [2024-07-15 10:06:20.978727] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:25:07.485 [2024-07-15 10:06:20.978765] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:07.485 2024/07/15 10:06:20 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:/tmp/tmp.TpEooliSSY subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:25:07.485 request: 00:25:07.485 { 00:25:07.485 "method": "bdev_nvme_attach_controller", 00:25:07.485 "params": { 00:25:07.485 "name": "TLSTEST", 00:25:07.485 "trtype": "tcp", 00:25:07.485 "traddr": "10.0.0.2", 00:25:07.485 "adrfam": "ipv4", 00:25:07.485 "trsvcid": "4420", 00:25:07.485 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:07.485 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:07.485 "prchk_reftag": false, 00:25:07.485 "prchk_guard": false, 00:25:07.485 "hdgst": false, 00:25:07.485 "ddgst": false, 00:25:07.485 "psk": "/tmp/tmp.TpEooliSSY" 00:25:07.485 } 00:25:07.485 } 00:25:07.485 Got JSON-RPC error response 00:25:07.485 GoRPCClient: error on JSON-RPC call 00:25:07.485 10:06:20 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 83887 00:25:07.485 10:06:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 83887 ']' 00:25:07.485 10:06:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 83887 00:25:07.485 10:06:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:25:07.485 10:06:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:07.485 10:06:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 83887 00:25:07.485 killing process with pid 83887 00:25:07.485 Received shutdown signal, test time was about 10.000000 seconds 00:25:07.485 00:25:07.485 Latency(us) 00:25:07.485 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:07.485 =================================================================================================================== 00:25:07.485 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:25:07.485 10:06:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:25:07.486 10:06:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:25:07.486 10:06:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 83887' 00:25:07.486 10:06:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 83887 00:25:07.486 [2024-07-15 10:06:21.026964] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:25:07.486 10:06:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 83887 00:25:07.744 10:06:21 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:25:07.744 10:06:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:25:07.744 10:06:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:25:07.744 10:06:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:25:07.744 10:06:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:25:07.744 10:06:21 nvmf_tcp.nvmf_tls -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.xrzdhqAweF 00:25:07.744 10:06:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:25:07.744 10:06:21 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.xrzdhqAweF 00:25:07.744 10:06:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:25:07.744 10:06:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:07.744 10:06:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:25:07.744 10:06:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:07.744 10:06:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.xrzdhqAweF 00:25:07.744 10:06:21 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:25:07.744 10:06:21 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:25:07.744 10:06:21 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:25:07.744 10:06:21 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.xrzdhqAweF' 00:25:07.744 10:06:21 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:07.744 10:06:21 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=83932 00:25:07.744 10:06:21 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:25:07.744 10:06:21 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:25:07.744 10:06:21 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 83932 /var/tmp/bdevperf.sock 00:25:07.744 10:06:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 83932 ']' 00:25:07.744 10:06:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:07.744 10:06:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:07.744 10:06:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:07.744 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:07.744 10:06:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:07.744 10:06:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:07.744 [2024-07-15 10:06:21.270539] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
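The next negative case keeps the correct key but presents hostnqn nqn.2016-06.io.spdk:host2, which was never added to cnode1. The target resolves the PSK from a TLS identity built out of the host and subsystem NQNs ("NVMe0R01 <hostnqn> <subnqn>" in the error further down), so the handshake should fail even though the key material itself matches. The only binding that exists is the one created during setup:

# only host1 was ever bound to a PSK for cnode1; host2 has no entry to look up
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host \
    nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.xrzdhqAweF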
00:25:07.744 [2024-07-15 10:06:21.270711] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83932 ] 00:25:08.002 [2024-07-15 10:06:21.407542] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:08.002 [2024-07-15 10:06:21.511006] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:25:08.570 10:06:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:08.570 10:06:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:25:08.570 10:06:22 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.xrzdhqAweF 00:25:08.829 [2024-07-15 10:06:22.318280] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:08.829 [2024-07-15 10:06:22.318380] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:25:08.829 [2024-07-15 10:06:22.322876] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:25:08.829 [2024-07-15 10:06:22.322912] posix.c: 589:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:25:08.829 [2024-07-15 10:06:22.322958] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:25:08.829 [2024-07-15 10:06:22.323630] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21e4ca0 (107): Transport endpoint is not connected 00:25:08.829 [2024-07-15 10:06:22.324617] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21e4ca0 (9): Bad file descriptor 00:25:08.829 [2024-07-15 10:06:22.325613] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:08.829 [2024-07-15 10:06:22.325634] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:25:08.829 [2024-07-15 10:06:22.325643] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:08.829 2024/07/15 10:06:22 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host2 name:TLSTEST prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:/tmp/tmp.xrzdhqAweF subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:25:08.829 request: 00:25:08.829 { 00:25:08.829 "method": "bdev_nvme_attach_controller", 00:25:08.830 "params": { 00:25:08.830 "name": "TLSTEST", 00:25:08.830 "trtype": "tcp", 00:25:08.830 "traddr": "10.0.0.2", 00:25:08.830 "adrfam": "ipv4", 00:25:08.830 "trsvcid": "4420", 00:25:08.830 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:08.830 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:25:08.830 "prchk_reftag": false, 00:25:08.830 "prchk_guard": false, 00:25:08.830 "hdgst": false, 00:25:08.830 "ddgst": false, 00:25:08.830 "psk": "/tmp/tmp.xrzdhqAweF" 00:25:08.830 } 00:25:08.830 } 00:25:08.830 Got JSON-RPC error response 00:25:08.830 GoRPCClient: error on JSON-RPC call 00:25:08.830 10:06:22 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 83932 00:25:08.830 10:06:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 83932 ']' 00:25:08.830 10:06:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 83932 00:25:08.830 10:06:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:25:08.830 10:06:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:08.830 10:06:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 83932 00:25:08.830 10:06:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:25:08.830 10:06:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:25:08.830 killing process with pid 83932 00:25:08.830 10:06:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 83932' 00:25:08.830 10:06:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 83932 00:25:08.830 Received shutdown signal, test time was about 10.000000 seconds 00:25:08.830 00:25:08.830 Latency(us) 00:25:08.830 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:08.830 =================================================================================================================== 00:25:08.830 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:25:08.830 [2024-07-15 10:06:22.371763] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:25:08.830 10:06:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 83932 00:25:09.088 10:06:22 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:25:09.088 10:06:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:25:09.088 10:06:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:25:09.089 10:06:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:25:09.089 10:06:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:25:09.089 10:06:22 nvmf_tcp.nvmf_tls -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.xrzdhqAweF 00:25:09.089 10:06:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:25:09.089 10:06:22 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.xrzdhqAweF 00:25:09.089 10:06:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:25:09.089 10:06:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:09.089 10:06:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:25:09.089 10:06:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:09.089 10:06:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.xrzdhqAweF 00:25:09.089 10:06:22 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:25:09.089 10:06:22 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:25:09.089 10:06:22 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:25:09.089 10:06:22 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.xrzdhqAweF' 00:25:09.089 10:06:22 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:09.089 10:06:22 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:25:09.089 10:06:22 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=83972 00:25:09.089 10:06:22 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:25:09.089 10:06:22 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 83972 /var/tmp/bdevperf.sock 00:25:09.089 10:06:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 83972 ']' 00:25:09.089 10:06:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:09.089 10:06:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:09.089 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:09.089 10:06:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:09.089 10:06:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:09.089 10:06:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:09.089 [2024-07-15 10:06:22.592879] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:25:09.089 [2024-07-15 10:06:22.592948] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83972 ] 00:25:09.348 [2024-07-15 10:06:22.718931] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:09.348 [2024-07-15 10:06:22.825283] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:25:09.914 10:06:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:09.914 10:06:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:25:09.914 10:06:23 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.xrzdhqAweF 00:25:10.174 [2024-07-15 10:06:23.657864] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:10.174 [2024-07-15 10:06:23.657958] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:25:10.174 [2024-07-15 10:06:23.662451] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:25:10.174 [2024-07-15 10:06:23.662490] posix.c: 589:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:25:10.174 [2024-07-15 10:06:23.662536] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:25:10.174 [2024-07-15 10:06:23.663207] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0bca0 (107): Transport endpoint is not connected 00:25:10.174 [2024-07-15 10:06:23.664192] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0bca0 (9): Bad file descriptor 00:25:10.174 [2024-07-15 10:06:23.665188] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:25:10.174 [2024-07-15 10:06:23.665208] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:25:10.174 [2024-07-15 10:06:23.665219] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:25:10.174 2024/07/15 10:06:23 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:/tmp/tmp.xrzdhqAweF subnqn:nqn.2016-06.io.spdk:cnode2 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:25:10.174 request: 00:25:10.174 { 00:25:10.174 "method": "bdev_nvme_attach_controller", 00:25:10.174 "params": { 00:25:10.174 "name": "TLSTEST", 00:25:10.174 "trtype": "tcp", 00:25:10.174 "traddr": "10.0.0.2", 00:25:10.174 "adrfam": "ipv4", 00:25:10.174 "trsvcid": "4420", 00:25:10.174 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:25:10.174 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:10.174 "prchk_reftag": false, 00:25:10.174 "prchk_guard": false, 00:25:10.174 "hdgst": false, 00:25:10.174 "ddgst": false, 00:25:10.174 "psk": "/tmp/tmp.xrzdhqAweF" 00:25:10.174 } 00:25:10.174 } 00:25:10.174 Got JSON-RPC error response 00:25:10.174 GoRPCClient: error on JSON-RPC call 00:25:10.174 10:06:23 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 83972 00:25:10.174 10:06:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 83972 ']' 00:25:10.174 10:06:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 83972 00:25:10.174 10:06:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:25:10.174 10:06:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:10.174 10:06:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 83972 00:25:10.174 killing process with pid 83972 00:25:10.174 Received shutdown signal, test time was about 10.000000 seconds 00:25:10.174 00:25:10.174 Latency(us) 00:25:10.174 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:10.174 =================================================================================================================== 00:25:10.174 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:25:10.174 10:06:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:25:10.174 10:06:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:25:10.174 10:06:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 83972' 00:25:10.174 10:06:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 83972 00:25:10.174 [2024-07-15 10:06:23.722359] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:25:10.174 10:06:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 83972 00:25:10.433 10:06:23 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:25:10.433 10:06:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:25:10.433 10:06:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:25:10.433 10:06:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:25:10.433 10:06:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:25:10.433 10:06:23 nvmf_tcp.nvmf_tls -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:25:10.433 10:06:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:25:10.433 10:06:23 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:25:10.433 10:06:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:25:10.433 10:06:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:10.433 10:06:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:25:10.433 10:06:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:10.433 10:06:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:25:10.433 10:06:23 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:25:10.433 10:06:23 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:25:10.433 10:06:23 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:25:10.433 10:06:23 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk= 00:25:10.433 10:06:23 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:10.434 10:06:23 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=84013 00:25:10.434 10:06:23 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:25:10.434 10:06:23 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:25:10.434 10:06:23 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 84013 /var/tmp/bdevperf.sock 00:25:10.434 10:06:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 84013 ']' 00:25:10.434 10:06:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:10.434 10:06:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:10.434 10:06:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:10.434 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:10.434 10:06:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:10.434 10:06:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:10.434 [2024-07-15 10:06:23.974830] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:25:10.434 [2024-07-15 10:06:23.974932] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84013 ] 00:25:10.692 [2024-07-15 10:06:24.120746] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:10.692 [2024-07-15 10:06:24.226372] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:25:11.261 10:06:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:11.261 10:06:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:25:11.261 10:06:24 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:25:11.520 [2024-07-15 10:06:25.027649] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:25:11.520 [2024-07-15 10:06:25.029330] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xee6240 (9): Bad file descriptor 00:25:11.520 [2024-07-15 10:06:25.030322] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:11.520 [2024-07-15 10:06:25.030344] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:25:11.520 [2024-07-15 10:06:25.030353] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:11.520 2024/07/15 10:06:25 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:25:11.520 request: 00:25:11.520 { 00:25:11.520 "method": "bdev_nvme_attach_controller", 00:25:11.520 "params": { 00:25:11.520 "name": "TLSTEST", 00:25:11.520 "trtype": "tcp", 00:25:11.520 "traddr": "10.0.0.2", 00:25:11.520 "adrfam": "ipv4", 00:25:11.520 "trsvcid": "4420", 00:25:11.520 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:11.520 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:11.520 "prchk_reftag": false, 00:25:11.520 "prchk_guard": false, 00:25:11.520 "hdgst": false, 00:25:11.520 "ddgst": false 00:25:11.520 } 00:25:11.520 } 00:25:11.520 Got JSON-RPC error response 00:25:11.520 GoRPCClient: error on JSON-RPC call 00:25:11.520 10:06:25 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 84013 00:25:11.520 10:06:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 84013 ']' 00:25:11.520 10:06:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 84013 00:25:11.520 10:06:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:25:11.520 10:06:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:11.520 10:06:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 84013 00:25:11.520 killing process with pid 84013 00:25:11.520 Received shutdown signal, test time was about 10.000000 seconds 00:25:11.520 00:25:11.520 Latency(us) 00:25:11.520 Device 
Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:11.520 =================================================================================================================== 00:25:11.520 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:25:11.520 10:06:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:25:11.520 10:06:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:25:11.520 10:06:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 84013' 00:25:11.520 10:06:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 84013 00:25:11.520 10:06:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 84013 00:25:11.779 10:06:25 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:25:11.779 10:06:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:25:11.779 10:06:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:25:11.779 10:06:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:25:11.779 10:06:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:25:11.779 10:06:25 nvmf_tcp.nvmf_tls -- target/tls.sh@158 -- # killprocess 83388 00:25:11.779 10:06:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 83388 ']' 00:25:11.779 10:06:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 83388 00:25:11.779 10:06:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:25:11.779 10:06:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:11.779 10:06:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 83388 00:25:11.779 killing process with pid 83388 00:25:11.779 10:06:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:25:11.780 10:06:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:25:11.780 10:06:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 83388' 00:25:11.780 10:06:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 83388 00:25:11.780 [2024-07-15 10:06:25.299325] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:25:11.780 10:06:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 83388 00:25:12.038 10:06:25 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:25:12.038 10:06:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:25:12.038 10:06:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:25:12.038 10:06:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:25:12.038 10:06:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:25:12.038 10:06:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=2 00:25:12.038 10:06:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:25:12.038 10:06:25 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:25:12.038 10:06:25 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # mktemp 00:25:12.038 10:06:25 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.9pVHJ6M57s 
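With the no-PSK attach rejected as expected and the first target (pid 83388) shut down, the test re-keys: the same interchange formatting is applied to a 48-hex-digit secret with digest 2 instead of 1, which is what produces the NVMeTLSkey-1:02: string echoed into the new mktemp'd path below. Reusing the earlier sketch (same CRC32 assumption, not shown verbatim in the trace), only the inputs change:

key_long=00112233445566778899aabbccddeeff0011223344556677   # 48 hex digits, as in the trace
python3 -c '
import base64, sys, zlib
k = sys.argv[1].encode()
# same assumption as before: little-endian CRC32 appended before base64
crc = zlib.crc32(k).to_bytes(4, "little")
print("NVMeTLSkey-1:02:" + base64.b64encode(k + crc).decode() + ":")
' "$key_long"

A fresh nvmf_tgt (pid 84069) is then started without --wait-for-rpc and set up with this longer key.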
00:25:12.038 10:06:25 nvmf_tcp.nvmf_tls -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:25:12.038 10:06:25 nvmf_tcp.nvmf_tls -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.9pVHJ6M57s 00:25:12.038 10:06:25 nvmf_tcp.nvmf_tls -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:25:12.038 10:06:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:12.038 10:06:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:12.038 10:06:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:12.038 10:06:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=84069 00:25:12.038 10:06:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:25:12.038 10:06:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 84069 00:25:12.038 10:06:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 84069 ']' 00:25:12.038 10:06:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:12.038 10:06:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:12.038 10:06:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:12.038 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:12.038 10:06:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:12.038 10:06:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:12.297 [2024-07-15 10:06:25.627238] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:25:12.297 [2024-07-15 10:06:25.627302] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:12.297 [2024-07-15 10:06:25.764898] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:12.297 [2024-07-15 10:06:25.870202] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:12.297 [2024-07-15 10:06:25.870252] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:12.297 [2024-07-15 10:06:25.870258] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:12.297 [2024-07-15 10:06:25.870263] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:12.297 [2024-07-15 10:06:25.870266] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
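At this point the interchange key has been written to the mktemp path /tmp/tmp.9pVHJ6M57s, the file has been restricted to mode 0600, and nvmfappstart has launched a fresh nvmf_tgt inside the nvmf_tgt_ns_spdk network namespace: -m 0x2 puts its single reactor on core 1 (the "Reactor started on core 1" notice follows just below), -e 0xFFFF enables every tracepoint group, and -i 0 sets the shared-memory instance id. Outside the harness, the launch plus the wait-for-RPC step could be sketched like this (illustrative retry loop, not the waitforlisten helper used here; paths abbreviated):

  ip netns exec nvmf_tgt_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
  # poll the default RPC socket (/var/tmp/spdk.sock) until the target answers
  until ./scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done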
00:25:12.297 [2024-07-15 10:06:25.870284] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:13.234 10:06:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:13.234 10:06:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:25:13.234 10:06:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:13.234 10:06:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:13.234 10:06:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:13.234 10:06:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:13.234 10:06:26 nvmf_tcp.nvmf_tls -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.9pVHJ6M57s 00:25:13.234 10:06:26 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.9pVHJ6M57s 00:25:13.234 10:06:26 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:25:13.235 [2024-07-15 10:06:26.741532] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:13.235 10:06:26 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:25:13.493 10:06:26 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:25:13.754 [2024-07-15 10:06:27.140805] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:25:13.754 [2024-07-15 10:06:27.140990] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:13.754 10:06:27 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:25:14.043 malloc0 00:25:14.043 10:06:27 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:25:14.043 10:06:27 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.9pVHJ6M57s 00:25:14.324 [2024-07-15 10:06:27.717186] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:25:14.324 10:06:27 nvmf_tcp.nvmf_tls -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.9pVHJ6M57s 00:25:14.324 10:06:27 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:25:14.324 10:06:27 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:25:14.324 10:06:27 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:25:14.324 10:06:27 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.9pVHJ6M57s' 00:25:14.324 10:06:27 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:14.324 10:06:27 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:25:14.324 10:06:27 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=84166 00:25:14.324 10:06:27 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:25:14.324 10:06:27 
nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 84166 /var/tmp/bdevperf.sock 00:25:14.324 10:06:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 84166 ']' 00:25:14.324 10:06:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:14.324 10:06:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:14.324 10:06:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:14.325 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:14.325 10:06:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:14.325 10:06:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:14.325 [2024-07-15 10:06:27.777676] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:25:14.325 [2024-07-15 10:06:27.777768] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84166 ] 00:25:14.584 [2024-07-15 10:06:27.916506] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:14.584 [2024-07-15 10:06:28.023708] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:25:15.152 10:06:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:15.152 10:06:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:25:15.152 10:06:28 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.9pVHJ6M57s 00:25:15.412 [2024-07-15 10:06:28.840959] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:15.412 [2024-07-15 10:06:28.841060] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:25:15.412 TLSTESTn1 00:25:15.412 10:06:28 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:25:15.671 Running I/O for 10 seconds... 
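Condensed from the trace above, this first (successful) TLS pass boils down to the following RPC sequence, with the listener created with -k (TLS secure channel) and both sides pointed at the same 0600 PSK file; paths are abbreviated here, the log uses the full /home/vagrant/spdk_repo/spdk prefix:

  # target side (default RPC socket /var/tmp/spdk.sock)
  scripts/rpc.py nvmf_create_transport -t tcp -o
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
  scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.9pVHJ6M57s
  # initiator side: bdevperf in -z (wait for RPC) mode on its own socket, then attach over TLS and run
  build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.9pVHJ6M57s
  examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests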
00:25:25.653 00:25:25.653 Latency(us) 00:25:25.653 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:25.653 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:25:25.653 Verification LBA range: start 0x0 length 0x2000 00:25:25.653 TLSTESTn1 : 10.01 6200.36 24.22 0.00 0.00 20609.95 4006.57 16598.64 00:25:25.653 =================================================================================================================== 00:25:25.653 Total : 6200.36 24.22 0.00 0.00 20609.95 4006.57 16598.64 00:25:25.653 0 00:25:25.654 10:06:39 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:25.654 10:06:39 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 84166 00:25:25.654 10:06:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 84166 ']' 00:25:25.654 10:06:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 84166 00:25:25.654 10:06:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:25:25.654 10:06:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:25.654 10:06:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 84166 00:25:25.654 killing process with pid 84166 00:25:25.654 Received shutdown signal, test time was about 10.000000 seconds 00:25:25.654 00:25:25.654 Latency(us) 00:25:25.654 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:25.654 =================================================================================================================== 00:25:25.654 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:25.654 10:06:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:25:25.654 10:06:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:25:25.654 10:06:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 84166' 00:25:25.654 10:06:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 84166 00:25:25.654 [2024-07-15 10:06:39.087696] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:25:25.654 10:06:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 84166 00:25:25.914 10:06:39 nvmf_tcp.nvmf_tls -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.9pVHJ6M57s 00:25:25.914 10:06:39 nvmf_tcp.nvmf_tls -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.9pVHJ6M57s 00:25:25.914 10:06:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:25:25.914 10:06:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.9pVHJ6M57s 00:25:25.914 10:06:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:25:25.914 10:06:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:25.914 10:06:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:25:25.914 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
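That 10-second run is the positive case: roughly 6200 IOPS (about 24 MiB/s) of 4096-byte verify I/O at queue depth 128 over the TLS-protected connection. The step that starts here is the negative case: the PSK file is loosened to 0666 and run_bdevperf is wrapped in the harness's NOT helper, so the test step only passes if the attach fails. A NOT-style wrapper is essentially an exit-status inverter; an illustrative sketch (the in-tree helper also special-cases signal exit codes above 128):

  NOT() { ! "$@"; }
  NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.9pVHJ6M57s \
      && echo 'expected failure observed'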
00:25:25.914 10:06:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:25.914 10:06:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.9pVHJ6M57s 00:25:25.914 10:06:39 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:25:25.914 10:06:39 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:25:25.914 10:06:39 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:25:25.914 10:06:39 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.9pVHJ6M57s' 00:25:25.914 10:06:39 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:25.914 10:06:39 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=84318 00:25:25.914 10:06:39 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:25:25.914 10:06:39 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 84318 /var/tmp/bdevperf.sock 00:25:25.914 10:06:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 84318 ']' 00:25:25.914 10:06:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:25.914 10:06:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:25.914 10:06:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:25.914 10:06:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:25.914 10:06:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:25.914 10:06:39 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:25:25.914 [2024-07-15 10:06:39.342254] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:25:25.914 [2024-07-15 10:06:39.342323] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84318 ] 00:25:25.914 [2024-07-15 10:06:39.480036] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:26.174 [2024-07-15 10:06:39.581805] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:25:26.742 10:06:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:26.742 10:06:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:25:26.742 10:06:40 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.9pVHJ6M57s 00:25:27.001 [2024-07-15 10:06:40.358010] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:27.001 [2024-07-15 10:06:40.358074] bdev_nvme.c:6125:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:25:27.001 [2024-07-15 10:06:40.358082] bdev_nvme.c:6230:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.9pVHJ6M57s 00:25:27.001 2024/07/15 10:06:40 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:/tmp/tmp.9pVHJ6M57s subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-1 Msg=Operation not permitted 00:25:27.001 request: 00:25:27.001 { 00:25:27.001 "method": "bdev_nvme_attach_controller", 00:25:27.001 "params": { 00:25:27.001 "name": "TLSTEST", 00:25:27.001 "trtype": "tcp", 00:25:27.001 "traddr": "10.0.0.2", 00:25:27.001 "adrfam": "ipv4", 00:25:27.001 "trsvcid": "4420", 00:25:27.001 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:27.001 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:27.001 "prchk_reftag": false, 00:25:27.001 "prchk_guard": false, 00:25:27.001 "hdgst": false, 00:25:27.001 "ddgst": false, 00:25:27.001 "psk": "/tmp/tmp.9pVHJ6M57s" 00:25:27.001 } 00:25:27.001 } 00:25:27.001 Got JSON-RPC error response 00:25:27.001 GoRPCClient: error on JSON-RPC call 00:25:27.001 10:06:40 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 84318 00:25:27.001 10:06:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 84318 ']' 00:25:27.001 10:06:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 84318 00:25:27.001 10:06:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:25:27.001 10:06:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:27.001 10:06:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 84318 00:25:27.001 killing process with pid 84318 00:25:27.001 Received shutdown signal, test time was about 10.000000 seconds 00:25:27.001 00:25:27.001 Latency(us) 00:25:27.001 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:27.001 =================================================================================================================== 00:25:27.001 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:25:27.001 
10:06:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:25:27.001 10:06:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:25:27.001 10:06:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 84318' 00:25:27.001 10:06:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 84318 00:25:27.001 10:06:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 84318 00:25:27.261 10:06:40 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:25:27.261 10:06:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:25:27.261 10:06:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:25:27.261 10:06:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:25:27.261 10:06:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:25:27.261 10:06:40 nvmf_tcp.nvmf_tls -- target/tls.sh@174 -- # killprocess 84069 00:25:27.261 10:06:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 84069 ']' 00:25:27.261 10:06:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 84069 00:25:27.261 10:06:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:25:27.261 10:06:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:27.261 10:06:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 84069 00:25:27.261 killing process with pid 84069 00:25:27.261 10:06:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:25:27.261 10:06:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:25:27.261 10:06:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 84069' 00:25:27.261 10:06:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 84069 00:25:27.261 [2024-07-15 10:06:40.636499] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:25:27.261 10:06:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 84069 00:25:27.261 10:06:40 nvmf_tcp.nvmf_tls -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:25:27.261 10:06:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:27.261 10:06:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:27.261 10:06:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:27.521 10:06:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=84369 00:25:27.521 10:06:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:25:27.521 10:06:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 84369 00:25:27.521 10:06:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 84369 ']' 00:25:27.521 10:06:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:27.521 10:06:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:27.521 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:27.521 10:06:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
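The failure just above is exactly what the 0666 chmod was meant to provoke: bdev_nvme refuses to load a PSK file that group/other can access ("Incorrect permissions for PSK file"), the attach is rejected with Code=-1 (Operation not permitted), and NOT run_bdevperf therefore succeeds. The same check exists on the target side (tcp_load_psk), which is why the nvmf_subsystem_add_host attempt a little further down also fails until the test tightens the file back up:

  chmod 0600 /tmp/tmp.9pVHJ6M57s
  stat -c '%a' /tmp/tmp.9pVHJ6M57s   # expect 600, i.e. no group/other access to the key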
00:25:27.521 10:06:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:27.521 10:06:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:27.521 [2024-07-15 10:06:40.898207] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:25:27.521 [2024-07-15 10:06:40.898297] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:27.521 [2024-07-15 10:06:41.037971] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:27.780 [2024-07-15 10:06:41.138627] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:27.780 [2024-07-15 10:06:41.138678] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:27.780 [2024-07-15 10:06:41.138684] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:27.780 [2024-07-15 10:06:41.138689] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:27.780 [2024-07-15 10:06:41.138692] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:27.780 [2024-07-15 10:06:41.138713] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:28.347 10:06:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:28.347 10:06:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:25:28.347 10:06:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:28.347 10:06:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:28.347 10:06:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:28.347 10:06:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:28.347 10:06:41 nvmf_tcp.nvmf_tls -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.9pVHJ6M57s 00:25:28.347 10:06:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:25:28.347 10:06:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.9pVHJ6M57s 00:25:28.347 10:06:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=setup_nvmf_tgt 00:25:28.347 10:06:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:28.347 10:06:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t setup_nvmf_tgt 00:25:28.347 10:06:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:28.347 10:06:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # setup_nvmf_tgt /tmp/tmp.9pVHJ6M57s 00:25:28.347 10:06:41 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.9pVHJ6M57s 00:25:28.347 10:06:41 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:25:28.606 [2024-07-15 10:06:41.981607] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:28.606 10:06:42 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:25:28.866 10:06:42 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:25:28.866 [2024-07-15 10:06:42.352935] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:25:28.866 [2024-07-15 10:06:42.353114] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:28.866 10:06:42 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:25:29.126 malloc0 00:25:29.126 10:06:42 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:25:29.386 10:06:42 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.9pVHJ6M57s 00:25:29.386 [2024-07-15 10:06:42.960619] tcp.c:3589:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:25:29.386 [2024-07-15 10:06:42.960665] tcp.c:3675:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:25:29.386 [2024-07-15 10:06:42.960694] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:25:29.386 2024/07/15 10:06:42 error on JSON-RPC call, method: nvmf_subsystem_add_host, params: map[host:nqn.2016-06.io.spdk:host1 nqn:nqn.2016-06.io.spdk:cnode1 psk:/tmp/tmp.9pVHJ6M57s], err: error received for nvmf_subsystem_add_host method, err: Code=-32603 Msg=Internal error 00:25:29.386 request: 00:25:29.386 { 00:25:29.386 "method": "nvmf_subsystem_add_host", 00:25:29.386 "params": { 00:25:29.386 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:29.386 "host": "nqn.2016-06.io.spdk:host1", 00:25:29.386 "psk": "/tmp/tmp.9pVHJ6M57s" 00:25:29.386 } 00:25:29.386 } 00:25:29.386 Got JSON-RPC error response 00:25:29.386 GoRPCClient: error on JSON-RPC call 00:25:29.647 10:06:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:25:29.647 10:06:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:25:29.647 10:06:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:25:29.647 10:06:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:25:29.647 10:06:42 nvmf_tcp.nvmf_tls -- target/tls.sh@180 -- # killprocess 84369 00:25:29.647 10:06:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 84369 ']' 00:25:29.647 10:06:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 84369 00:25:29.647 10:06:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:25:29.647 10:06:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:29.647 10:06:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 84369 00:25:29.647 10:06:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:25:29.647 10:06:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:25:29.647 killing process with pid 84369 00:25:29.647 10:06:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 84369' 00:25:29.647 10:06:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 84369 00:25:29.647 10:06:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 84369 00:25:29.647 10:06:43 nvmf_tcp.nvmf_tls -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.9pVHJ6M57s 00:25:29.647 10:06:43 
nvmf_tcp.nvmf_tls -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:25:29.647 10:06:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:29.647 10:06:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:29.647 10:06:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:29.906 10:06:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:25:29.906 10:06:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=84475 00:25:29.906 10:06:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 84475 00:25:29.906 10:06:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 84475 ']' 00:25:29.906 10:06:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:29.906 10:06:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:29.906 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:29.906 10:06:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:29.906 10:06:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:29.906 10:06:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:29.906 [2024-07-15 10:06:43.278464] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:25:29.906 [2024-07-15 10:06:43.278533] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:29.906 [2024-07-15 10:06:43.403796] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:30.165 [2024-07-15 10:06:43.510281] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:30.165 [2024-07-15 10:06:43.510323] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:30.165 [2024-07-15 10:06:43.510330] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:30.165 [2024-07-15 10:06:43.510335] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:30.165 [2024-07-15 10:06:43.510340] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:25:30.165 [2024-07-15 10:06:43.510364] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:30.733 10:06:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:30.733 10:06:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:25:30.733 10:06:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:30.733 10:06:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:30.733 10:06:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:30.733 10:06:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:30.733 10:06:44 nvmf_tcp.nvmf_tls -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.9pVHJ6M57s 00:25:30.733 10:06:44 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.9pVHJ6M57s 00:25:30.733 10:06:44 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:25:30.993 [2024-07-15 10:06:44.406869] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:30.993 10:06:44 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:25:31.253 10:06:44 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:25:31.253 [2024-07-15 10:06:44.814144] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:25:31.253 [2024-07-15 10:06:44.814326] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:31.253 10:06:44 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:25:31.513 malloc0 00:25:31.513 10:06:45 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:25:31.773 10:06:45 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.9pVHJ6M57s 00:25:32.032 [2024-07-15 10:06:45.393812] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:25:32.032 10:06:45 nvmf_tcp.nvmf_tls -- target/tls.sh@187 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:25:32.032 10:06:45 nvmf_tcp.nvmf_tls -- target/tls.sh@188 -- # bdevperf_pid=84572 00:25:32.032 10:06:45 nvmf_tcp.nvmf_tls -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:25:32.032 10:06:45 nvmf_tcp.nvmf_tls -- target/tls.sh@191 -- # waitforlisten 84572 /var/tmp/bdevperf.sock 00:25:32.032 10:06:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 84572 ']' 00:25:32.032 10:06:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:32.032 10:06:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:32.032 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
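With the key file back at mode 0600 the whole target setup is repeated (nvmf_tgt pid 84475) and a new bdevperf (pid 84572) is attached with the PSK; this time the controller comes up (TLSTESTn1 below). The two large JSON blocks that follow are rpc.py save_config dumps, captured by the script into the tgtconf and bdevperfconf shell variables so the configuration can be replayed later. Captured to files instead (illustrative names), the same step would be:

  scripts/rpc.py save_config > tgt_config.json                                 # target-side configuration
  scripts/rpc.py -s /var/tmp/bdevperf.sock save_config > bdevperf_config.json  # bdevperf-side configuration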
00:25:32.032 10:06:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:32.032 10:06:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:32.032 10:06:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:32.032 [2024-07-15 10:06:45.449446] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:25:32.032 [2024-07-15 10:06:45.449530] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84572 ] 00:25:32.032 [2024-07-15 10:06:45.585710] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:32.292 [2024-07-15 10:06:45.687336] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:25:32.862 10:06:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:32.862 10:06:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:25:32.862 10:06:46 nvmf_tcp.nvmf_tls -- target/tls.sh@192 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.9pVHJ6M57s 00:25:33.121 [2024-07-15 10:06:46.472945] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:33.121 [2024-07-15 10:06:46.473045] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:25:33.121 TLSTESTn1 00:25:33.121 10:06:46 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:25:33.382 10:06:46 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # tgtconf='{ 00:25:33.382 "subsystems": [ 00:25:33.382 { 00:25:33.382 "subsystem": "keyring", 00:25:33.382 "config": [] 00:25:33.382 }, 00:25:33.382 { 00:25:33.382 "subsystem": "iobuf", 00:25:33.382 "config": [ 00:25:33.382 { 00:25:33.382 "method": "iobuf_set_options", 00:25:33.382 "params": { 00:25:33.382 "large_bufsize": 135168, 00:25:33.382 "large_pool_count": 1024, 00:25:33.382 "small_bufsize": 8192, 00:25:33.382 "small_pool_count": 8192 00:25:33.382 } 00:25:33.382 } 00:25:33.382 ] 00:25:33.382 }, 00:25:33.382 { 00:25:33.382 "subsystem": "sock", 00:25:33.382 "config": [ 00:25:33.382 { 00:25:33.382 "method": "sock_set_default_impl", 00:25:33.382 "params": { 00:25:33.382 "impl_name": "posix" 00:25:33.382 } 00:25:33.382 }, 00:25:33.382 { 00:25:33.382 "method": "sock_impl_set_options", 00:25:33.382 "params": { 00:25:33.382 "enable_ktls": false, 00:25:33.382 "enable_placement_id": 0, 00:25:33.382 "enable_quickack": false, 00:25:33.382 "enable_recv_pipe": true, 00:25:33.382 "enable_zerocopy_send_client": false, 00:25:33.382 "enable_zerocopy_send_server": true, 00:25:33.382 "impl_name": "ssl", 00:25:33.382 "recv_buf_size": 4096, 00:25:33.382 "send_buf_size": 4096, 00:25:33.382 "tls_version": 0, 00:25:33.382 "zerocopy_threshold": 0 00:25:33.382 } 00:25:33.382 }, 00:25:33.382 { 00:25:33.382 "method": "sock_impl_set_options", 00:25:33.382 "params": { 00:25:33.382 "enable_ktls": false, 00:25:33.382 "enable_placement_id": 0, 00:25:33.382 "enable_quickack": false, 00:25:33.382 "enable_recv_pipe": true, 00:25:33.382 
"enable_zerocopy_send_client": false, 00:25:33.382 "enable_zerocopy_send_server": true, 00:25:33.382 "impl_name": "posix", 00:25:33.382 "recv_buf_size": 2097152, 00:25:33.382 "send_buf_size": 2097152, 00:25:33.382 "tls_version": 0, 00:25:33.382 "zerocopy_threshold": 0 00:25:33.382 } 00:25:33.382 } 00:25:33.382 ] 00:25:33.382 }, 00:25:33.382 { 00:25:33.382 "subsystem": "vmd", 00:25:33.382 "config": [] 00:25:33.382 }, 00:25:33.382 { 00:25:33.382 "subsystem": "accel", 00:25:33.382 "config": [ 00:25:33.382 { 00:25:33.382 "method": "accel_set_options", 00:25:33.382 "params": { 00:25:33.382 "buf_count": 2048, 00:25:33.382 "large_cache_size": 16, 00:25:33.382 "sequence_count": 2048, 00:25:33.382 "small_cache_size": 128, 00:25:33.382 "task_count": 2048 00:25:33.382 } 00:25:33.382 } 00:25:33.382 ] 00:25:33.382 }, 00:25:33.382 { 00:25:33.382 "subsystem": "bdev", 00:25:33.382 "config": [ 00:25:33.382 { 00:25:33.382 "method": "bdev_set_options", 00:25:33.382 "params": { 00:25:33.382 "bdev_auto_examine": true, 00:25:33.382 "bdev_io_cache_size": 256, 00:25:33.382 "bdev_io_pool_size": 65535, 00:25:33.382 "iobuf_large_cache_size": 16, 00:25:33.382 "iobuf_small_cache_size": 128 00:25:33.382 } 00:25:33.382 }, 00:25:33.382 { 00:25:33.382 "method": "bdev_raid_set_options", 00:25:33.382 "params": { 00:25:33.382 "process_window_size_kb": 1024 00:25:33.382 } 00:25:33.382 }, 00:25:33.382 { 00:25:33.382 "method": "bdev_iscsi_set_options", 00:25:33.382 "params": { 00:25:33.382 "timeout_sec": 30 00:25:33.382 } 00:25:33.382 }, 00:25:33.382 { 00:25:33.382 "method": "bdev_nvme_set_options", 00:25:33.382 "params": { 00:25:33.382 "action_on_timeout": "none", 00:25:33.382 "allow_accel_sequence": false, 00:25:33.382 "arbitration_burst": 0, 00:25:33.382 "bdev_retry_count": 3, 00:25:33.382 "ctrlr_loss_timeout_sec": 0, 00:25:33.382 "delay_cmd_submit": true, 00:25:33.382 "dhchap_dhgroups": [ 00:25:33.382 "null", 00:25:33.382 "ffdhe2048", 00:25:33.382 "ffdhe3072", 00:25:33.382 "ffdhe4096", 00:25:33.382 "ffdhe6144", 00:25:33.382 "ffdhe8192" 00:25:33.382 ], 00:25:33.382 "dhchap_digests": [ 00:25:33.382 "sha256", 00:25:33.382 "sha384", 00:25:33.382 "sha512" 00:25:33.382 ], 00:25:33.382 "disable_auto_failback": false, 00:25:33.382 "fast_io_fail_timeout_sec": 0, 00:25:33.382 "generate_uuids": false, 00:25:33.382 "high_priority_weight": 0, 00:25:33.382 "io_path_stat": false, 00:25:33.382 "io_queue_requests": 0, 00:25:33.382 "keep_alive_timeout_ms": 10000, 00:25:33.382 "low_priority_weight": 0, 00:25:33.382 "medium_priority_weight": 0, 00:25:33.382 "nvme_adminq_poll_period_us": 10000, 00:25:33.382 "nvme_error_stat": false, 00:25:33.382 "nvme_ioq_poll_period_us": 0, 00:25:33.382 "rdma_cm_event_timeout_ms": 0, 00:25:33.382 "rdma_max_cq_size": 0, 00:25:33.382 "rdma_srq_size": 0, 00:25:33.382 "reconnect_delay_sec": 0, 00:25:33.382 "timeout_admin_us": 0, 00:25:33.382 "timeout_us": 0, 00:25:33.382 "transport_ack_timeout": 0, 00:25:33.382 "transport_retry_count": 4, 00:25:33.382 "transport_tos": 0 00:25:33.382 } 00:25:33.382 }, 00:25:33.382 { 00:25:33.382 "method": "bdev_nvme_set_hotplug", 00:25:33.383 "params": { 00:25:33.383 "enable": false, 00:25:33.383 "period_us": 100000 00:25:33.383 } 00:25:33.383 }, 00:25:33.383 { 00:25:33.383 "method": "bdev_malloc_create", 00:25:33.383 "params": { 00:25:33.383 "block_size": 4096, 00:25:33.383 "name": "malloc0", 00:25:33.383 "num_blocks": 8192, 00:25:33.383 "optimal_io_boundary": 0, 00:25:33.383 "physical_block_size": 4096, 00:25:33.383 "uuid": "4bdce82f-3593-40c1-9ebc-f51d0b1be042" 00:25:33.383 } 
00:25:33.383 }, 00:25:33.383 { 00:25:33.383 "method": "bdev_wait_for_examine" 00:25:33.383 } 00:25:33.383 ] 00:25:33.383 }, 00:25:33.383 { 00:25:33.383 "subsystem": "nbd", 00:25:33.383 "config": [] 00:25:33.383 }, 00:25:33.383 { 00:25:33.383 "subsystem": "scheduler", 00:25:33.383 "config": [ 00:25:33.383 { 00:25:33.383 "method": "framework_set_scheduler", 00:25:33.383 "params": { 00:25:33.383 "name": "static" 00:25:33.383 } 00:25:33.383 } 00:25:33.383 ] 00:25:33.383 }, 00:25:33.383 { 00:25:33.383 "subsystem": "nvmf", 00:25:33.383 "config": [ 00:25:33.383 { 00:25:33.383 "method": "nvmf_set_config", 00:25:33.383 "params": { 00:25:33.383 "admin_cmd_passthru": { 00:25:33.383 "identify_ctrlr": false 00:25:33.383 }, 00:25:33.383 "discovery_filter": "match_any" 00:25:33.383 } 00:25:33.383 }, 00:25:33.383 { 00:25:33.383 "method": "nvmf_set_max_subsystems", 00:25:33.383 "params": { 00:25:33.383 "max_subsystems": 1024 00:25:33.383 } 00:25:33.383 }, 00:25:33.383 { 00:25:33.383 "method": "nvmf_set_crdt", 00:25:33.383 "params": { 00:25:33.383 "crdt1": 0, 00:25:33.383 "crdt2": 0, 00:25:33.383 "crdt3": 0 00:25:33.383 } 00:25:33.383 }, 00:25:33.383 { 00:25:33.383 "method": "nvmf_create_transport", 00:25:33.383 "params": { 00:25:33.383 "abort_timeout_sec": 1, 00:25:33.383 "ack_timeout": 0, 00:25:33.383 "buf_cache_size": 4294967295, 00:25:33.383 "c2h_success": false, 00:25:33.383 "data_wr_pool_size": 0, 00:25:33.383 "dif_insert_or_strip": false, 00:25:33.383 "in_capsule_data_size": 4096, 00:25:33.383 "io_unit_size": 131072, 00:25:33.383 "max_aq_depth": 128, 00:25:33.383 "max_io_qpairs_per_ctrlr": 127, 00:25:33.383 "max_io_size": 131072, 00:25:33.383 "max_queue_depth": 128, 00:25:33.383 "num_shared_buffers": 511, 00:25:33.383 "sock_priority": 0, 00:25:33.383 "trtype": "TCP", 00:25:33.383 "zcopy": false 00:25:33.383 } 00:25:33.383 }, 00:25:33.383 { 00:25:33.383 "method": "nvmf_create_subsystem", 00:25:33.383 "params": { 00:25:33.383 "allow_any_host": false, 00:25:33.383 "ana_reporting": false, 00:25:33.383 "max_cntlid": 65519, 00:25:33.383 "max_namespaces": 10, 00:25:33.383 "min_cntlid": 1, 00:25:33.383 "model_number": "SPDK bdev Controller", 00:25:33.383 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:33.383 "serial_number": "SPDK00000000000001" 00:25:33.383 } 00:25:33.383 }, 00:25:33.383 { 00:25:33.383 "method": "nvmf_subsystem_add_host", 00:25:33.383 "params": { 00:25:33.383 "host": "nqn.2016-06.io.spdk:host1", 00:25:33.383 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:33.383 "psk": "/tmp/tmp.9pVHJ6M57s" 00:25:33.383 } 00:25:33.383 }, 00:25:33.383 { 00:25:33.383 "method": "nvmf_subsystem_add_ns", 00:25:33.383 "params": { 00:25:33.383 "namespace": { 00:25:33.383 "bdev_name": "malloc0", 00:25:33.383 "nguid": "4BDCE82F359340C19EBCF51D0B1BE042", 00:25:33.383 "no_auto_visible": false, 00:25:33.383 "nsid": 1, 00:25:33.383 "uuid": "4bdce82f-3593-40c1-9ebc-f51d0b1be042" 00:25:33.383 }, 00:25:33.383 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:25:33.383 } 00:25:33.383 }, 00:25:33.383 { 00:25:33.383 "method": "nvmf_subsystem_add_listener", 00:25:33.383 "params": { 00:25:33.383 "listen_address": { 00:25:33.383 "adrfam": "IPv4", 00:25:33.383 "traddr": "10.0.0.2", 00:25:33.383 "trsvcid": "4420", 00:25:33.383 "trtype": "TCP" 00:25:33.383 }, 00:25:33.383 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:33.383 "secure_channel": true 00:25:33.383 } 00:25:33.383 } 00:25:33.383 ] 00:25:33.383 } 00:25:33.383 ] 00:25:33.383 }' 00:25:33.383 10:06:46 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/bdevperf.sock save_config 00:25:33.709 10:06:47 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # bdevperfconf='{ 00:25:33.709 "subsystems": [ 00:25:33.709 { 00:25:33.709 "subsystem": "keyring", 00:25:33.709 "config": [] 00:25:33.709 }, 00:25:33.709 { 00:25:33.709 "subsystem": "iobuf", 00:25:33.709 "config": [ 00:25:33.709 { 00:25:33.709 "method": "iobuf_set_options", 00:25:33.709 "params": { 00:25:33.709 "large_bufsize": 135168, 00:25:33.709 "large_pool_count": 1024, 00:25:33.709 "small_bufsize": 8192, 00:25:33.709 "small_pool_count": 8192 00:25:33.709 } 00:25:33.709 } 00:25:33.709 ] 00:25:33.709 }, 00:25:33.709 { 00:25:33.709 "subsystem": "sock", 00:25:33.709 "config": [ 00:25:33.709 { 00:25:33.709 "method": "sock_set_default_impl", 00:25:33.709 "params": { 00:25:33.709 "impl_name": "posix" 00:25:33.709 } 00:25:33.709 }, 00:25:33.709 { 00:25:33.709 "method": "sock_impl_set_options", 00:25:33.709 "params": { 00:25:33.709 "enable_ktls": false, 00:25:33.709 "enable_placement_id": 0, 00:25:33.709 "enable_quickack": false, 00:25:33.709 "enable_recv_pipe": true, 00:25:33.709 "enable_zerocopy_send_client": false, 00:25:33.709 "enable_zerocopy_send_server": true, 00:25:33.709 "impl_name": "ssl", 00:25:33.709 "recv_buf_size": 4096, 00:25:33.709 "send_buf_size": 4096, 00:25:33.709 "tls_version": 0, 00:25:33.709 "zerocopy_threshold": 0 00:25:33.709 } 00:25:33.709 }, 00:25:33.709 { 00:25:33.709 "method": "sock_impl_set_options", 00:25:33.709 "params": { 00:25:33.709 "enable_ktls": false, 00:25:33.709 "enable_placement_id": 0, 00:25:33.709 "enable_quickack": false, 00:25:33.709 "enable_recv_pipe": true, 00:25:33.709 "enable_zerocopy_send_client": false, 00:25:33.709 "enable_zerocopy_send_server": true, 00:25:33.709 "impl_name": "posix", 00:25:33.709 "recv_buf_size": 2097152, 00:25:33.709 "send_buf_size": 2097152, 00:25:33.709 "tls_version": 0, 00:25:33.709 "zerocopy_threshold": 0 00:25:33.709 } 00:25:33.709 } 00:25:33.709 ] 00:25:33.709 }, 00:25:33.709 { 00:25:33.709 "subsystem": "vmd", 00:25:33.709 "config": [] 00:25:33.709 }, 00:25:33.709 { 00:25:33.709 "subsystem": "accel", 00:25:33.709 "config": [ 00:25:33.709 { 00:25:33.709 "method": "accel_set_options", 00:25:33.709 "params": { 00:25:33.709 "buf_count": 2048, 00:25:33.709 "large_cache_size": 16, 00:25:33.709 "sequence_count": 2048, 00:25:33.709 "small_cache_size": 128, 00:25:33.709 "task_count": 2048 00:25:33.709 } 00:25:33.709 } 00:25:33.709 ] 00:25:33.709 }, 00:25:33.709 { 00:25:33.709 "subsystem": "bdev", 00:25:33.709 "config": [ 00:25:33.709 { 00:25:33.709 "method": "bdev_set_options", 00:25:33.709 "params": { 00:25:33.709 "bdev_auto_examine": true, 00:25:33.709 "bdev_io_cache_size": 256, 00:25:33.709 "bdev_io_pool_size": 65535, 00:25:33.709 "iobuf_large_cache_size": 16, 00:25:33.709 "iobuf_small_cache_size": 128 00:25:33.709 } 00:25:33.709 }, 00:25:33.709 { 00:25:33.709 "method": "bdev_raid_set_options", 00:25:33.709 "params": { 00:25:33.709 "process_window_size_kb": 1024 00:25:33.709 } 00:25:33.709 }, 00:25:33.709 { 00:25:33.709 "method": "bdev_iscsi_set_options", 00:25:33.709 "params": { 00:25:33.709 "timeout_sec": 30 00:25:33.709 } 00:25:33.709 }, 00:25:33.709 { 00:25:33.709 "method": "bdev_nvme_set_options", 00:25:33.709 "params": { 00:25:33.709 "action_on_timeout": "none", 00:25:33.709 "allow_accel_sequence": false, 00:25:33.709 "arbitration_burst": 0, 00:25:33.709 "bdev_retry_count": 3, 00:25:33.709 "ctrlr_loss_timeout_sec": 0, 00:25:33.709 "delay_cmd_submit": true, 00:25:33.709 "dhchap_dhgroups": [ 00:25:33.709 "null", 
00:25:33.709 "ffdhe2048", 00:25:33.709 "ffdhe3072", 00:25:33.709 "ffdhe4096", 00:25:33.709 "ffdhe6144", 00:25:33.709 "ffdhe8192" 00:25:33.709 ], 00:25:33.709 "dhchap_digests": [ 00:25:33.709 "sha256", 00:25:33.709 "sha384", 00:25:33.709 "sha512" 00:25:33.709 ], 00:25:33.709 "disable_auto_failback": false, 00:25:33.709 "fast_io_fail_timeout_sec": 0, 00:25:33.709 "generate_uuids": false, 00:25:33.709 "high_priority_weight": 0, 00:25:33.709 "io_path_stat": false, 00:25:33.709 "io_queue_requests": 512, 00:25:33.709 "keep_alive_timeout_ms": 10000, 00:25:33.709 "low_priority_weight": 0, 00:25:33.709 "medium_priority_weight": 0, 00:25:33.709 "nvme_adminq_poll_period_us": 10000, 00:25:33.709 "nvme_error_stat": false, 00:25:33.709 "nvme_ioq_poll_period_us": 0, 00:25:33.709 "rdma_cm_event_timeout_ms": 0, 00:25:33.709 "rdma_max_cq_size": 0, 00:25:33.709 "rdma_srq_size": 0, 00:25:33.709 "reconnect_delay_sec": 0, 00:25:33.709 "timeout_admin_us": 0, 00:25:33.709 "timeout_us": 0, 00:25:33.709 "transport_ack_timeout": 0, 00:25:33.709 "transport_retry_count": 4, 00:25:33.709 "transport_tos": 0 00:25:33.709 } 00:25:33.709 }, 00:25:33.709 { 00:25:33.709 "method": "bdev_nvme_attach_controller", 00:25:33.709 "params": { 00:25:33.709 "adrfam": "IPv4", 00:25:33.709 "ctrlr_loss_timeout_sec": 0, 00:25:33.709 "ddgst": false, 00:25:33.709 "fast_io_fail_timeout_sec": 0, 00:25:33.709 "hdgst": false, 00:25:33.709 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:33.709 "name": "TLSTEST", 00:25:33.709 "prchk_guard": false, 00:25:33.709 "prchk_reftag": false, 00:25:33.709 "psk": "/tmp/tmp.9pVHJ6M57s", 00:25:33.709 "reconnect_delay_sec": 0, 00:25:33.710 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:33.710 "traddr": "10.0.0.2", 00:25:33.710 "trsvcid": "4420", 00:25:33.710 "trtype": "TCP" 00:25:33.710 } 00:25:33.710 }, 00:25:33.710 { 00:25:33.710 "method": "bdev_nvme_set_hotplug", 00:25:33.710 "params": { 00:25:33.710 "enable": false, 00:25:33.710 "period_us": 100000 00:25:33.710 } 00:25:33.710 }, 00:25:33.710 { 00:25:33.710 "method": "bdev_wait_for_examine" 00:25:33.710 } 00:25:33.710 ] 00:25:33.710 }, 00:25:33.710 { 00:25:33.710 "subsystem": "nbd", 00:25:33.710 "config": [] 00:25:33.710 } 00:25:33.710 ] 00:25:33.710 }' 00:25:33.710 10:06:47 nvmf_tcp.nvmf_tls -- target/tls.sh@199 -- # killprocess 84572 00:25:33.710 10:06:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 84572 ']' 00:25:33.710 10:06:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 84572 00:25:33.710 10:06:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:25:33.710 10:06:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:33.710 10:06:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 84572 00:25:33.710 killing process with pid 84572 00:25:33.710 10:06:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:25:33.710 10:06:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:25:33.710 10:06:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 84572' 00:25:33.710 10:06:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 84572 00:25:33.710 Received shutdown signal, test time was about 10.000000 seconds 00:25:33.710 00:25:33.710 Latency(us) 00:25:33.710 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:33.710 
=================================================================================================================== 00:25:33.710 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:25:33.710 [2024-07-15 10:06:47.158894] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:25:33.710 10:06:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 84572 00:25:33.986 10:06:47 nvmf_tcp.nvmf_tls -- target/tls.sh@200 -- # killprocess 84475 00:25:33.986 10:06:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 84475 ']' 00:25:33.986 10:06:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 84475 00:25:33.986 10:06:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:25:33.986 10:06:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:33.986 10:06:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 84475 00:25:33.986 10:06:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:25:33.986 10:06:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:25:33.986 killing process with pid 84475 00:25:33.986 10:06:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 84475' 00:25:33.986 10:06:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 84475 00:25:33.986 [2024-07-15 10:06:47.379839] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:25:33.986 10:06:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 84475 00:25:34.246 10:06:47 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # echo '{ 00:25:34.246 "subsystems": [ 00:25:34.246 { 00:25:34.246 "subsystem": "keyring", 00:25:34.246 "config": [] 00:25:34.246 }, 00:25:34.246 { 00:25:34.246 "subsystem": "iobuf", 00:25:34.246 "config": [ 00:25:34.246 { 00:25:34.246 "method": "iobuf_set_options", 00:25:34.246 "params": { 00:25:34.246 "large_bufsize": 135168, 00:25:34.246 "large_pool_count": 1024, 00:25:34.246 "small_bufsize": 8192, 00:25:34.246 "small_pool_count": 8192 00:25:34.246 } 00:25:34.246 } 00:25:34.246 ] 00:25:34.246 }, 00:25:34.246 { 00:25:34.246 "subsystem": "sock", 00:25:34.246 "config": [ 00:25:34.246 { 00:25:34.246 "method": "sock_set_default_impl", 00:25:34.246 "params": { 00:25:34.246 "impl_name": "posix" 00:25:34.246 } 00:25:34.246 }, 00:25:34.246 { 00:25:34.246 "method": "sock_impl_set_options", 00:25:34.246 "params": { 00:25:34.246 "enable_ktls": false, 00:25:34.246 "enable_placement_id": 0, 00:25:34.246 "enable_quickack": false, 00:25:34.246 "enable_recv_pipe": true, 00:25:34.246 "enable_zerocopy_send_client": false, 00:25:34.246 "enable_zerocopy_send_server": true, 00:25:34.246 "impl_name": "ssl", 00:25:34.246 "recv_buf_size": 4096, 00:25:34.246 "send_buf_size": 4096, 00:25:34.246 "tls_version": 0, 00:25:34.246 "zerocopy_threshold": 0 00:25:34.246 } 00:25:34.247 }, 00:25:34.247 { 00:25:34.247 "method": "sock_impl_set_options", 00:25:34.247 "params": { 00:25:34.247 "enable_ktls": false, 00:25:34.247 "enable_placement_id": 0, 00:25:34.247 "enable_quickack": false, 00:25:34.247 "enable_recv_pipe": true, 00:25:34.247 "enable_zerocopy_send_client": false, 00:25:34.247 "enable_zerocopy_send_server": true, 00:25:34.247 "impl_name": "posix", 00:25:34.247 "recv_buf_size": 2097152, 00:25:34.247 "send_buf_size": 2097152, 00:25:34.247 
"tls_version": 0, 00:25:34.247 "zerocopy_threshold": 0 00:25:34.247 } 00:25:34.247 } 00:25:34.247 ] 00:25:34.247 }, 00:25:34.247 { 00:25:34.247 "subsystem": "vmd", 00:25:34.247 "config": [] 00:25:34.247 }, 00:25:34.247 { 00:25:34.247 "subsystem": "accel", 00:25:34.247 "config": [ 00:25:34.247 { 00:25:34.247 "method": "accel_set_options", 00:25:34.247 "params": { 00:25:34.247 "buf_count": 2048, 00:25:34.247 "large_cache_size": 16, 00:25:34.247 "sequence_count": 2048, 00:25:34.247 "small_cache_size": 128, 00:25:34.247 "task_count": 2048 00:25:34.247 } 00:25:34.247 } 00:25:34.247 ] 00:25:34.247 }, 00:25:34.247 { 00:25:34.247 "subsystem": "bdev", 00:25:34.247 "config": [ 00:25:34.247 { 00:25:34.247 "method": "bdev_set_options", 00:25:34.247 "params": { 00:25:34.247 "bdev_auto_examine": true, 00:25:34.247 "bdev_io_cache_size": 256, 00:25:34.247 "bdev_io_pool_size": 65535, 00:25:34.247 "iobuf_large_cache_size": 16, 00:25:34.247 "iobuf_small_cache_size": 128 00:25:34.247 } 00:25:34.247 }, 00:25:34.247 { 00:25:34.247 "method": "bdev_raid_set_options", 00:25:34.247 "params": { 00:25:34.247 "process_window_size_kb": 1024 00:25:34.247 } 00:25:34.247 }, 00:25:34.247 { 00:25:34.247 "method": "bdev_iscsi_set_options", 00:25:34.247 "params": { 00:25:34.247 "timeout_sec": 30 00:25:34.247 } 00:25:34.247 }, 00:25:34.247 { 00:25:34.247 "method": "bdev_nvme_set_options", 00:25:34.247 "params": { 00:25:34.247 "action_on_timeout": "none", 00:25:34.247 "allow_accel_sequence": false, 00:25:34.247 "arbitration_burst": 0, 00:25:34.247 "bdev_retry_count": 3, 00:25:34.247 "ctrlr_loss_timeout_sec": 0, 00:25:34.247 "delay_cmd_submit": true, 00:25:34.247 "dhchap_dhgroups": [ 00:25:34.247 "null", 00:25:34.247 "ffdhe2048", 00:25:34.247 "ffdhe3072", 00:25:34.247 "ffdhe4096", 00:25:34.247 "ffdhe6144", 00:25:34.247 "ffdhe8192" 00:25:34.247 ], 00:25:34.247 "dhchap_digests": [ 00:25:34.247 "sha256", 00:25:34.247 "sha384", 00:25:34.247 "sha512" 00:25:34.247 ], 00:25:34.247 "disable_auto_failback": false, 00:25:34.247 "fast_io_fail_timeout_sec": 0, 00:25:34.247 "generate_uuids": false, 00:25:34.247 "high_priority_weight": 0, 00:25:34.247 "io_path_stat": false, 00:25:34.247 "io_queue_requests": 0, 00:25:34.247 "keep_alive_timeout_ms": 10000, 00:25:34.247 "low_priority_weight": 0, 00:25:34.247 "medium_priority_weight": 0, 00:25:34.247 "nvme_adminq_poll_period_us": 10000, 00:25:34.247 "nvme_error_stat": false, 00:25:34.247 "nvme_ioq_poll_period_us": 0, 00:25:34.247 "rdma_cm_event_timeout_ms": 0, 00:25:34.247 "rdma_max_cq_size": 0, 00:25:34.247 "rdma_srq_size": 0, 00:25:34.247 "reconnect_delay_sec": 0, 00:25:34.247 "timeout_admin_us": 0, 00:25:34.247 "timeout_us": 0, 00:25:34.247 "transport_ack_timeout": 0, 00:25:34.247 "transport_retry_count": 4, 00:25:34.247 "transport_tos": 0 00:25:34.247 } 00:25:34.247 }, 00:25:34.247 { 00:25:34.247 "method": "bdev_nvme_set_hotplug", 00:25:34.247 "params": { 00:25:34.247 "enable": false, 00:25:34.247 "period_us": 100000 00:25:34.247 } 00:25:34.247 }, 00:25:34.247 { 00:25:34.247 "method": "bdev_malloc_create", 00:25:34.247 "params": { 00:25:34.247 "block_size": 4096, 00:25:34.247 "name": "malloc0", 00:25:34.247 "num_blocks": 8192, 00:25:34.247 "optimal_io_boundary": 0, 00:25:34.247 "physical_block_size": 4096, 00:25:34.247 "uuid": "4bdce82f-3593-40c1-9ebc-f51d0b1be042" 00:25:34.247 } 00:25:34.247 }, 00:25:34.247 { 00:25:34.247 "method": "bdev_wait_for_examine" 00:25:34.247 } 00:25:34.247 ] 00:25:34.247 }, 00:25:34.247 { 00:25:34.247 "subsystem": "nbd", 00:25:34.247 "config": [] 00:25:34.247 }, 
00:25:34.247 { 00:25:34.247 "subsystem": "scheduler", 00:25:34.247 "config": [ 00:25:34.247 { 00:25:34.247 "method": "framework_set_scheduler", 00:25:34.247 "params": { 00:25:34.247 "name": "static" 00:25:34.247 } 00:25:34.247 } 00:25:34.247 ] 00:25:34.247 }, 00:25:34.247 { 00:25:34.247 "subsystem": "nvmf", 00:25:34.247 "config": [ 00:25:34.247 { 00:25:34.247 "method": "nvmf_set_config", 00:25:34.247 "params": { 00:25:34.247 "admin_cmd_passthru": { 00:25:34.247 "identify_ctrlr": false 00:25:34.247 }, 00:25:34.247 "discovery_filter": "match_any" 00:25:34.247 } 00:25:34.247 }, 00:25:34.247 { 00:25:34.247 "method": "nvmf_set_max_subsystems", 00:25:34.247 "params": { 00:25:34.247 "max_subsystems": 1024 00:25:34.247 } 00:25:34.247 }, 00:25:34.247 { 00:25:34.247 "method": "nvmf_set_crdt", 00:25:34.247 "params": { 00:25:34.247 "crdt1": 0, 00:25:34.247 "crdt2": 0, 00:25:34.247 "crdt3": 0 00:25:34.247 } 00:25:34.247 }, 00:25:34.247 { 00:25:34.247 "method": "nvmf_create_transport", 00:25:34.247 "params": { 00:25:34.247 "abort_timeout_sec": 1, 00:25:34.247 "ack_timeout": 0, 00:25:34.247 "buf_cache_size": 4294967295, 00:25:34.247 "c2h_success": false, 00:25:34.247 "data_wr_pool_size": 0, 00:25:34.247 "dif_insert_or_strip": false, 00:25:34.247 "in_capsule_data_size": 4096, 00:25:34.247 "io_unit_size": 131072, 00:25:34.247 "max_aq_depth": 128, 00:25:34.247 "max_io_qpairs_per_ctrlr": 127, 00:25:34.247 "max_io_size": 131072, 00:25:34.247 "max_queue_depth": 128, 00:25:34.247 "num_shared_buffers": 511, 00:25:34.247 "sock_priority": 0, 00:25:34.247 "trtype": "TCP", 00:25:34.247 "zcopy": false 00:25:34.247 } 00:25:34.247 }, 00:25:34.247 { 00:25:34.247 "method": "nvmf_create_subsystem", 00:25:34.247 "params": { 00:25:34.247 "allow_any_host": false, 00:25:34.247 "ana_reporting": false, 00:25:34.247 "max_cntlid": 65519, 00:25:34.247 "max_namespaces": 10, 00:25:34.247 "min_cntlid": 1, 00:25:34.247 "model_number": "SPDK bdev Controller", 00:25:34.247 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:34.247 "serial_number": "SPDK00000000000001" 00:25:34.247 } 00:25:34.247 }, 00:25:34.247 { 00:25:34.247 "method": "nvmf_subsystem_add_host", 00:25:34.247 "params": { 00:25:34.247 "host": "nqn.2016-06.io.spdk:host1", 00:25:34.247 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:34.247 "psk": "/tmp/tmp.9pVHJ6M57s" 00:25:34.247 } 00:25:34.247 }, 00:25:34.247 { 00:25:34.247 "method": "nvmf_subsystem_add_ns", 00:25:34.247 "params": { 00:25:34.247 "namespace": { 00:25:34.247 "bdev_name": "malloc0", 00:25:34.247 "nguid": "4BDCE82F359340C19EBCF51D0B1BE042", 00:25:34.247 "no_auto_visible": false, 00:25:34.247 "nsid": 1, 00:25:34.247 "uuid": "4bdce82f-3593-40c1-9ebc-f51d0b1be042" 00:25:34.247 }, 00:25:34.247 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:25:34.247 } 00:25:34.247 }, 00:25:34.247 { 00:25:34.247 "method": "nvmf_subsystem_add_listener", 00:25:34.247 "params": { 00:25:34.247 "listen_address": { 00:25:34.247 "adrfam": "IPv4", 00:25:34.247 "traddr": "10.0.0.2", 00:25:34.247 "trsvcid": "4420", 00:25:34.247 "trtype": "TCP" 00:25:34.247 }, 00:25:34.247 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:34.247 "secure_channel": true 00:25:34.247 } 00:25:34.247 } 00:25:34.247 ] 00:25:34.247 } 00:25:34.247 ] 00:25:34.247 }' 00:25:34.247 10:06:47 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:25:34.247 10:06:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:34.247 10:06:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:34.247 10:06:47 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@10 -- # set +x 00:25:34.247 10:06:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=84651 00:25:34.247 10:06:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:25:34.248 10:06:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 84651 00:25:34.248 10:06:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 84651 ']' 00:25:34.248 10:06:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:34.248 10:06:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:34.248 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:34.248 10:06:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:34.248 10:06:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:34.248 10:06:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:34.248 [2024-07-15 10:06:47.645018] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:25:34.248 [2024-07-15 10:06:47.645091] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:34.248 [2024-07-15 10:06:47.772789] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:34.508 [2024-07-15 10:06:47.882191] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:34.508 [2024-07-15 10:06:47.882235] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:34.508 [2024-07-15 10:06:47.882242] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:34.508 [2024-07-15 10:06:47.882246] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:34.508 [2024-07-15 10:06:47.882251] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:25:34.508 [2024-07-15 10:06:47.882318] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:34.508 [2024-07-15 10:06:48.087593] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:34.768 [2024-07-15 10:06:48.103501] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:25:34.768 [2024-07-15 10:06:48.119466] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:25:34.768 [2024-07-15 10:06:48.119644] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:35.029 10:06:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:35.029 10:06:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:25:35.029 10:06:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:35.029 10:06:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:35.029 10:06:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:35.029 10:06:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:35.029 10:06:48 nvmf_tcp.nvmf_tls -- target/tls.sh@207 -- # bdevperf_pid=84689 00:25:35.029 10:06:48 nvmf_tcp.nvmf_tls -- target/tls.sh@208 -- # waitforlisten 84689 /var/tmp/bdevperf.sock 00:25:35.029 10:06:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 84689 ']' 00:25:35.029 10:06:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:35.029 10:06:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:35.029 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:35.029 10:06:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
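The JSON block above (target/tls.sh@203) is the full configuration the test pipes to nvmf_tgt through -c /dev/fd/62: a TLS-enabled TCP listener on 10.0.0.2:4420 with "secure_channel": true, and a host entry that still uses the deprecated PSK-path form ("psk": "/tmp/tmp.9pVHJ6M57s"), which is what triggers the nvmf_tcp_psk_path deprecation warnings seen in this log. As a reading aid only, a small jq sketch can pull the TLS-relevant entries out of such a dump; "config.json" is a hypothetical placeholder for wherever you saved the extracted JSON, not a file this test creates:

# Sketch only: filter the TLS-relevant methods out of the target config dumped above.
# "config.json" is a hypothetical filename for the saved JSON.
jq '.subsystems[]
    | select(.subsystem == "nvmf")
    | .config[]
    | select(.method == "nvmf_subsystem_add_listener" or .method == "nvmf_subsystem_add_host")
    | .params' config.json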
00:25:35.029 10:06:48 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:25:35.029 10:06:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:35.029 10:06:48 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # echo '{ 00:25:35.029 "subsystems": [ 00:25:35.029 { 00:25:35.029 "subsystem": "keyring", 00:25:35.029 "config": [] 00:25:35.029 }, 00:25:35.029 { 00:25:35.029 "subsystem": "iobuf", 00:25:35.029 "config": [ 00:25:35.029 { 00:25:35.029 "method": "iobuf_set_options", 00:25:35.029 "params": { 00:25:35.029 "large_bufsize": 135168, 00:25:35.029 "large_pool_count": 1024, 00:25:35.029 "small_bufsize": 8192, 00:25:35.029 "small_pool_count": 8192 00:25:35.029 } 00:25:35.029 } 00:25:35.029 ] 00:25:35.029 }, 00:25:35.029 { 00:25:35.029 "subsystem": "sock", 00:25:35.029 "config": [ 00:25:35.029 { 00:25:35.029 "method": "sock_set_default_impl", 00:25:35.029 "params": { 00:25:35.029 "impl_name": "posix" 00:25:35.029 } 00:25:35.029 }, 00:25:35.029 { 00:25:35.029 "method": "sock_impl_set_options", 00:25:35.029 "params": { 00:25:35.029 "enable_ktls": false, 00:25:35.029 "enable_placement_id": 0, 00:25:35.029 "enable_quickack": false, 00:25:35.029 "enable_recv_pipe": true, 00:25:35.029 "enable_zerocopy_send_client": false, 00:25:35.029 "enable_zerocopy_send_server": true, 00:25:35.029 "impl_name": "ssl", 00:25:35.029 "recv_buf_size": 4096, 00:25:35.029 "send_buf_size": 4096, 00:25:35.029 "tls_version": 0, 00:25:35.029 "zerocopy_threshold": 0 00:25:35.029 } 00:25:35.029 }, 00:25:35.029 { 00:25:35.029 "method": "sock_impl_set_options", 00:25:35.029 "params": { 00:25:35.029 "enable_ktls": false, 00:25:35.029 "enable_placement_id": 0, 00:25:35.029 "enable_quickack": false, 00:25:35.029 "enable_recv_pipe": true, 00:25:35.029 "enable_zerocopy_send_client": false, 00:25:35.029 "enable_zerocopy_send_server": true, 00:25:35.029 "impl_name": "posix", 00:25:35.029 "recv_buf_size": 2097152, 00:25:35.029 "send_buf_size": 2097152, 00:25:35.029 "tls_version": 0, 00:25:35.029 "zerocopy_threshold": 0 00:25:35.029 } 00:25:35.029 } 00:25:35.029 ] 00:25:35.029 }, 00:25:35.029 { 00:25:35.029 "subsystem": "vmd", 00:25:35.029 "config": [] 00:25:35.029 }, 00:25:35.029 { 00:25:35.029 "subsystem": "accel", 00:25:35.030 "config": [ 00:25:35.030 { 00:25:35.030 "method": "accel_set_options", 00:25:35.030 "params": { 00:25:35.030 "buf_count": 2048, 00:25:35.030 "large_cache_size": 16, 00:25:35.030 "sequence_count": 2048, 00:25:35.030 "small_cache_size": 128, 00:25:35.030 "task_count": 2048 00:25:35.030 } 00:25:35.030 } 00:25:35.030 ] 00:25:35.030 }, 00:25:35.030 { 00:25:35.030 "subsystem": "bdev", 00:25:35.030 "config": [ 00:25:35.030 { 00:25:35.030 "method": "bdev_set_options", 00:25:35.030 "params": { 00:25:35.030 "bdev_auto_examine": true, 00:25:35.030 "bdev_io_cache_size": 256, 00:25:35.030 "bdev_io_pool_size": 65535, 00:25:35.030 "iobuf_large_cache_size": 16, 00:25:35.030 "iobuf_small_cache_size": 128 00:25:35.030 } 00:25:35.030 }, 00:25:35.030 { 00:25:35.030 "method": "bdev_raid_set_options", 00:25:35.030 "params": { 00:25:35.030 "process_window_size_kb": 1024 00:25:35.030 } 00:25:35.030 }, 00:25:35.030 { 00:25:35.030 "method": "bdev_iscsi_set_options", 00:25:35.030 "params": { 00:25:35.030 "timeout_sec": 30 00:25:35.030 } 00:25:35.030 }, 00:25:35.030 { 00:25:35.030 "method": "bdev_nvme_set_options", 00:25:35.030 "params": { 00:25:35.030 "action_on_timeout": "none", 00:25:35.030 
"allow_accel_sequence": false, 00:25:35.030 "arbitration_burst": 0, 00:25:35.030 "bdev_retry_count": 3, 00:25:35.030 "ctrlr_loss_timeout_sec": 0, 00:25:35.030 "delay_cmd_submit": true, 00:25:35.030 "dhchap_dhgroups": [ 00:25:35.030 "null", 00:25:35.030 "ffdhe2048", 00:25:35.030 "ffdhe3072", 00:25:35.030 "ffdhe4096", 00:25:35.030 "ffdhe6144", 00:25:35.030 "ffdhe8192" 00:25:35.030 ], 00:25:35.030 "dhchap_digests": [ 00:25:35.030 "sha256", 00:25:35.030 "sha384", 00:25:35.030 "sha512" 00:25:35.030 ], 00:25:35.030 "disable_auto_failback": false, 00:25:35.030 "fast_io_fail_timeout_sec": 0, 00:25:35.030 "generate_uuids": false, 00:25:35.030 "high_priority_weight": 0, 00:25:35.030 "io_path_stat": false, 00:25:35.030 "io_queue_requests": 512, 00:25:35.030 "keep_alive_timeout_ms": 10000, 00:25:35.030 "low_priority_weight": 0, 00:25:35.030 "medium_priority_weight": 0, 00:25:35.030 "nvme_adminq_poll_period_us": 10000, 00:25:35.030 "nvme_error_stat": false, 00:25:35.030 "nvme_ioq_poll_period_us": 0, 00:25:35.030 "rdma_cm_event_timeout_ms": 0, 00:25:35.030 "rdma_max_cq_size": 0, 00:25:35.030 "rdma_srq_size": 0, 00:25:35.030 "reconnect_delay_sec": 0, 00:25:35.030 "timeout_admin_us": 0, 00:25:35.030 "timeout_us": 0, 00:25:35.030 "transport_ack_timeout": 0, 00:25:35.030 "transport_retry_count": 4, 00:25:35.030 "transport_tos": 0 00:25:35.030 } 00:25:35.030 }, 00:25:35.030 { 00:25:35.030 "method": "bdev_nvme_attach_controller", 00:25:35.030 "params": { 00:25:35.030 "adrfam": "IPv4", 00:25:35.030 "ctrlr_loss_timeout_sec": 0, 00:25:35.030 "ddgst": false, 00:25:35.030 "fast_io_fail_timeout_sec": 0, 00:25:35.030 "hdgst": false, 00:25:35.030 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:35.030 "name": "TLSTEST", 00:25:35.030 "prchk_guard": false, 00:25:35.030 "prchk_reftag": false, 00:25:35.030 "psk": "/tmp/tmp.9pVHJ6M57s", 00:25:35.030 "reconnect_delay_sec": 0, 00:25:35.030 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:35.030 "traddr": "10.0.0.2", 00:25:35.030 "trsvcid": "4420", 00:25:35.030 "trtype": "TCP" 00:25:35.030 } 00:25:35.030 }, 00:25:35.030 { 00:25:35.030 "method": "bdev_nvme_set_hotplug", 00:25:35.030 "params": { 00:25:35.030 "enable": false, 00:25:35.030 "period_us": 100000 00:25:35.030 } 00:25:35.030 }, 00:25:35.030 { 00:25:35.030 "method": "bdev_wait_for_examine" 00:25:35.030 } 00:25:35.030 ] 00:25:35.030 }, 00:25:35.030 { 00:25:35.030 "subsystem": "nbd", 00:25:35.030 "config": [] 00:25:35.030 } 00:25:35.030 ] 00:25:35.030 }' 00:25:35.030 10:06:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:35.030 [2024-07-15 10:06:48.599398] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:25:35.030 [2024-07-15 10:06:48.599554] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84689 ] 00:25:35.290 [2024-07-15 10:06:48.726322] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:35.290 [2024-07-15 10:06:48.830223] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:25:35.550 [2024-07-15 10:06:48.973951] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:35.550 [2024-07-15 10:06:48.974061] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:25:36.119 10:06:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:36.119 10:06:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:25:36.119 10:06:49 nvmf_tcp.nvmf_tls -- target/tls.sh@211 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:25:36.119 Running I/O for 10 seconds... 00:25:46.105 00:25:46.105 Latency(us) 00:25:46.105 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:46.105 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:25:46.105 Verification LBA range: start 0x0 length 0x2000 00:25:46.105 TLSTESTn1 : 10.01 6264.94 24.47 0.00 0.00 20398.30 3977.95 18544.68 00:25:46.105 =================================================================================================================== 00:25:46.105 Total : 6264.94 24.47 0.00 0.00 20398.30 3977.95 18544.68 00:25:46.105 0 00:25:46.105 10:06:59 nvmf_tcp.nvmf_tls -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:46.105 10:06:59 nvmf_tcp.nvmf_tls -- target/tls.sh@214 -- # killprocess 84689 00:25:46.105 10:06:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 84689 ']' 00:25:46.105 10:06:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 84689 00:25:46.105 10:06:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:25:46.105 10:06:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:46.105 10:06:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 84689 00:25:46.105 10:06:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:25:46.105 10:06:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:25:46.105 10:06:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 84689' 00:25:46.105 killing process with pid 84689 00:25:46.105 10:06:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 84689 00:25:46.105 Received shutdown signal, test time was about 10.000000 seconds 00:25:46.105 00:25:46.105 Latency(us) 00:25:46.105 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:46.105 =================================================================================================================== 00:25:46.105 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:46.105 [2024-07-15 10:06:59.616698] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:25:46.105 10:06:59 
nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 84689 00:25:46.365 10:06:59 nvmf_tcp.nvmf_tls -- target/tls.sh@215 -- # killprocess 84651 00:25:46.365 10:06:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 84651 ']' 00:25:46.365 10:06:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 84651 00:25:46.365 10:06:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:25:46.365 10:06:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:46.365 10:06:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 84651 00:25:46.365 10:06:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:25:46.365 10:06:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:25:46.365 killing process with pid 84651 00:25:46.365 10:06:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 84651' 00:25:46.365 10:06:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 84651 00:25:46.365 [2024-07-15 10:06:59.831690] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:25:46.365 10:06:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 84651 00:25:46.625 10:07:00 nvmf_tcp.nvmf_tls -- target/tls.sh@218 -- # nvmfappstart 00:25:46.625 10:07:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:46.625 10:07:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:46.625 10:07:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:46.625 10:07:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=84840 00:25:46.625 10:07:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:25:46.625 10:07:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 84840 00:25:46.625 10:07:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 84840 ']' 00:25:46.625 10:07:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:46.625 10:07:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:46.625 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:46.625 10:07:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:46.625 10:07:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:46.625 10:07:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:46.625 [2024-07-15 10:07:00.095201] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:25:46.625 [2024-07-15 10:07:00.095273] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:46.883 [2024-07-15 10:07:00.233974] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:46.883 [2024-07-15 10:07:00.336452] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:25:46.883 [2024-07-15 10:07:00.336503] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:46.883 [2024-07-15 10:07:00.336510] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:46.883 [2024-07-15 10:07:00.336514] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:46.883 [2024-07-15 10:07:00.336519] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:46.883 [2024-07-15 10:07:00.336542] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:47.452 10:07:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:47.452 10:07:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:25:47.452 10:07:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:47.452 10:07:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:47.452 10:07:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:47.452 10:07:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:47.452 10:07:01 nvmf_tcp.nvmf_tls -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.9pVHJ6M57s 00:25:47.452 10:07:01 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.9pVHJ6M57s 00:25:47.452 10:07:01 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:25:47.712 [2024-07-15 10:07:01.182905] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:47.712 10:07:01 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:25:47.971 10:07:01 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:25:48.231 [2024-07-15 10:07:01.558211] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:25:48.231 [2024-07-15 10:07:01.558449] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:48.231 10:07:01 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:25:48.231 malloc0 00:25:48.231 10:07:01 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:25:48.492 10:07:01 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.9pVHJ6M57s 00:25:48.751 [2024-07-15 10:07:02.133809] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:25:48.751 10:07:02 nvmf_tcp.nvmf_tls -- target/tls.sh@220 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:25:48.751 10:07:02 nvmf_tcp.nvmf_tls -- target/tls.sh@222 -- # bdevperf_pid=84938 00:25:48.751 10:07:02 nvmf_tcp.nvmf_tls -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:25:48.751 10:07:02 nvmf_tcp.nvmf_tls -- target/tls.sh@225 -- # waitforlisten 84938 /var/tmp/bdevperf.sock 00:25:48.752 
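For readability, the setup_nvmf_tgt sequence replayed just above (target/tls.sh@51 through @58) boils down to the following sketch; these are the same rpc.py calls and arguments recorded in this run, with /tmp/tmp.9pVHJ6M57s being the test's temporary PSK file:

# Sketch of the target-side RPCs logged above (same commands, grouped for reference).
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
psk=/tmp/tmp.9pVHJ6M57s                      # temporary PSK file created by the test
$rpc nvmf_create_transport -t tcp -o
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k   # -k = TLS listener ("experimental" notice above)
$rpc bdev_malloc_create 32 4096 -b malloc0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
$rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk "$psk"   # deprecated PSK-path form warned about above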
10:07:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 84938 ']' 00:25:48.752 10:07:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:48.752 10:07:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:48.752 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:48.752 10:07:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:48.752 10:07:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:48.752 10:07:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:48.752 [2024-07-15 10:07:02.187521] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:25:48.752 [2024-07-15 10:07:02.187588] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84938 ] 00:25:48.752 [2024-07-15 10:07:02.323620] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:49.011 [2024-07-15 10:07:02.427290] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:49.579 10:07:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:49.579 10:07:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:25:49.579 10:07:03 nvmf_tcp.nvmf_tls -- target/tls.sh@227 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.9pVHJ6M57s 00:25:49.841 10:07:03 nvmf_tcp.nvmf_tls -- target/tls.sh@228 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:25:49.841 [2024-07-15 10:07:03.405218] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:50.100 nvme0n1 00:25:50.100 10:07:03 nvmf_tcp.nvmf_tls -- target/tls.sh@232 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:50.100 Running I/O for 1 seconds... 
00:25:51.038 00:25:51.038 Latency(us) 00:25:51.038 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:51.038 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:25:51.038 Verification LBA range: start 0x0 length 0x2000 00:25:51.038 nvme0n1 : 1.01 6400.41 25.00 0.00 0.00 19858.61 4006.57 16026.27 00:25:51.038 =================================================================================================================== 00:25:51.038 Total : 6400.41 25.00 0.00 0.00 19858.61 4006.57 16026.27 00:25:51.038 0 00:25:51.296 10:07:04 nvmf_tcp.nvmf_tls -- target/tls.sh@234 -- # killprocess 84938 00:25:51.296 10:07:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 84938 ']' 00:25:51.296 10:07:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 84938 00:25:51.296 10:07:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:25:51.296 10:07:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:51.297 10:07:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 84938 00:25:51.297 10:07:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:25:51.297 10:07:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:25:51.297 10:07:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 84938' 00:25:51.297 killing process with pid 84938 00:25:51.297 10:07:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 84938 00:25:51.297 Received shutdown signal, test time was about 1.000000 seconds 00:25:51.297 00:25:51.297 Latency(us) 00:25:51.297 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:51.297 =================================================================================================================== 00:25:51.297 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:51.297 10:07:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 84938 00:25:51.297 10:07:04 nvmf_tcp.nvmf_tls -- target/tls.sh@235 -- # killprocess 84840 00:25:51.297 10:07:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 84840 ']' 00:25:51.297 10:07:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 84840 00:25:51.297 10:07:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:25:51.297 10:07:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:51.297 10:07:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 84840 00:25:51.297 killing process with pid 84840 00:25:51.297 10:07:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:25:51.297 10:07:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:25:51.297 10:07:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 84840' 00:25:51.297 10:07:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 84840 00:25:51.297 [2024-07-15 10:07:04.870007] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:25:51.297 10:07:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 84840 00:25:51.557 10:07:05 nvmf_tcp.nvmf_tls -- target/tls.sh@238 -- # nvmfappstart 00:25:51.557 10:07:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:51.557 10:07:05 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@722 -- # xtrace_disable 00:25:51.557 10:07:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:51.557 10:07:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=85009 00:25:51.557 10:07:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 85009 00:25:51.557 10:07:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 85009 ']' 00:25:51.557 10:07:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:51.557 10:07:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:51.557 10:07:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:51.557 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:51.557 10:07:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:51.557 10:07:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:51.557 10:07:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:25:51.816 [2024-07-15 10:07:05.142551] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:25:51.816 [2024-07-15 10:07:05.142645] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:51.816 [2024-07-15 10:07:05.284180] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:51.816 [2024-07-15 10:07:05.387597] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:51.816 [2024-07-15 10:07:05.387644] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:51.816 [2024-07-15 10:07:05.387650] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:51.816 [2024-07-15 10:07:05.387655] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:51.816 [2024-07-15 10:07:05.387666] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:25:51.816 [2024-07-15 10:07:05.387690] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:52.763 10:07:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:52.763 10:07:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:25:52.763 10:07:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:52.763 10:07:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:52.763 10:07:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:52.763 10:07:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:52.763 10:07:06 nvmf_tcp.nvmf_tls -- target/tls.sh@239 -- # rpc_cmd 00:25:52.763 10:07:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:52.763 10:07:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:52.763 [2024-07-15 10:07:06.054262] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:52.763 malloc0 00:25:52.763 [2024-07-15 10:07:06.082721] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:25:52.763 [2024-07-15 10:07:06.082893] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:52.763 10:07:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:52.763 10:07:06 nvmf_tcp.nvmf_tls -- target/tls.sh@252 -- # bdevperf_pid=85059 00:25:52.763 10:07:06 nvmf_tcp.nvmf_tls -- target/tls.sh@254 -- # waitforlisten 85059 /var/tmp/bdevperf.sock 00:25:52.763 10:07:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 85059 ']' 00:25:52.763 10:07:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:52.763 10:07:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:52.763 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:52.763 10:07:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:52.763 10:07:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:52.763 10:07:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:52.763 10:07:06 nvmf_tcp.nvmf_tls -- target/tls.sh@250 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:25:52.763 [2024-07-15 10:07:06.161476] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:25:52.763 [2024-07-15 10:07:06.161543] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85059 ] 00:25:52.763 [2024-07-15 10:07:06.298579] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:53.038 [2024-07-15 10:07:06.401935] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:53.607 10:07:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:53.607 10:07:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:25:53.608 10:07:07 nvmf_tcp.nvmf_tls -- target/tls.sh@255 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.9pVHJ6M57s 00:25:53.867 10:07:07 nvmf_tcp.nvmf_tls -- target/tls.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:25:53.867 [2024-07-15 10:07:07.385206] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:54.126 nvme0n1 00:25:54.126 10:07:07 nvmf_tcp.nvmf_tls -- target/tls.sh@260 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:54.126 Running I/O for 1 seconds... 00:25:55.063 00:25:55.063 Latency(us) 00:25:55.063 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:55.063 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:25:55.063 Verification LBA range: start 0x0 length 0x2000 00:25:55.063 nvme0n1 : 1.01 6408.94 25.03 0.00 0.00 19831.20 4264.13 17743.37 00:25:55.063 =================================================================================================================== 00:25:55.063 Total : 6408.94 25.03 0.00 0.00 19831.20 4264.13 17743.37 00:25:55.063 0 00:25:55.063 10:07:08 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # rpc_cmd save_config 00:25:55.063 10:07:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:55.063 10:07:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:55.322 10:07:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:55.322 10:07:08 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # tgtcfg='{ 00:25:55.322 "subsystems": [ 00:25:55.322 { 00:25:55.322 "subsystem": "keyring", 00:25:55.322 "config": [ 00:25:55.322 { 00:25:55.322 "method": "keyring_file_add_key", 00:25:55.322 "params": { 00:25:55.322 "name": "key0", 00:25:55.322 "path": "/tmp/tmp.9pVHJ6M57s" 00:25:55.322 } 00:25:55.322 } 00:25:55.322 ] 00:25:55.322 }, 00:25:55.322 { 00:25:55.322 "subsystem": "iobuf", 00:25:55.322 "config": [ 00:25:55.323 { 00:25:55.323 "method": "iobuf_set_options", 00:25:55.323 "params": { 00:25:55.323 "large_bufsize": 135168, 00:25:55.323 "large_pool_count": 1024, 00:25:55.323 "small_bufsize": 8192, 00:25:55.323 "small_pool_count": 8192 00:25:55.323 } 00:25:55.323 } 00:25:55.323 ] 00:25:55.323 }, 00:25:55.323 { 00:25:55.323 "subsystem": "sock", 00:25:55.323 "config": [ 00:25:55.323 { 00:25:55.323 "method": "sock_set_default_impl", 00:25:55.323 "params": { 00:25:55.323 "impl_name": "posix" 00:25:55.323 } 00:25:55.323 }, 00:25:55.323 { 00:25:55.323 "method": "sock_impl_set_options", 00:25:55.323 "params": { 00:25:55.323 
"enable_ktls": false, 00:25:55.323 "enable_placement_id": 0, 00:25:55.323 "enable_quickack": false, 00:25:55.323 "enable_recv_pipe": true, 00:25:55.323 "enable_zerocopy_send_client": false, 00:25:55.323 "enable_zerocopy_send_server": true, 00:25:55.323 "impl_name": "ssl", 00:25:55.323 "recv_buf_size": 4096, 00:25:55.323 "send_buf_size": 4096, 00:25:55.323 "tls_version": 0, 00:25:55.323 "zerocopy_threshold": 0 00:25:55.323 } 00:25:55.323 }, 00:25:55.323 { 00:25:55.323 "method": "sock_impl_set_options", 00:25:55.323 "params": { 00:25:55.323 "enable_ktls": false, 00:25:55.323 "enable_placement_id": 0, 00:25:55.323 "enable_quickack": false, 00:25:55.323 "enable_recv_pipe": true, 00:25:55.323 "enable_zerocopy_send_client": false, 00:25:55.323 "enable_zerocopy_send_server": true, 00:25:55.323 "impl_name": "posix", 00:25:55.323 "recv_buf_size": 2097152, 00:25:55.323 "send_buf_size": 2097152, 00:25:55.323 "tls_version": 0, 00:25:55.323 "zerocopy_threshold": 0 00:25:55.323 } 00:25:55.323 } 00:25:55.323 ] 00:25:55.323 }, 00:25:55.323 { 00:25:55.323 "subsystem": "vmd", 00:25:55.323 "config": [] 00:25:55.323 }, 00:25:55.323 { 00:25:55.323 "subsystem": "accel", 00:25:55.323 "config": [ 00:25:55.323 { 00:25:55.323 "method": "accel_set_options", 00:25:55.323 "params": { 00:25:55.323 "buf_count": 2048, 00:25:55.323 "large_cache_size": 16, 00:25:55.323 "sequence_count": 2048, 00:25:55.323 "small_cache_size": 128, 00:25:55.323 "task_count": 2048 00:25:55.323 } 00:25:55.323 } 00:25:55.323 ] 00:25:55.323 }, 00:25:55.323 { 00:25:55.323 "subsystem": "bdev", 00:25:55.323 "config": [ 00:25:55.323 { 00:25:55.323 "method": "bdev_set_options", 00:25:55.323 "params": { 00:25:55.323 "bdev_auto_examine": true, 00:25:55.323 "bdev_io_cache_size": 256, 00:25:55.323 "bdev_io_pool_size": 65535, 00:25:55.323 "iobuf_large_cache_size": 16, 00:25:55.323 "iobuf_small_cache_size": 128 00:25:55.323 } 00:25:55.323 }, 00:25:55.323 { 00:25:55.323 "method": "bdev_raid_set_options", 00:25:55.323 "params": { 00:25:55.323 "process_window_size_kb": 1024 00:25:55.323 } 00:25:55.323 }, 00:25:55.323 { 00:25:55.323 "method": "bdev_iscsi_set_options", 00:25:55.323 "params": { 00:25:55.323 "timeout_sec": 30 00:25:55.323 } 00:25:55.323 }, 00:25:55.323 { 00:25:55.323 "method": "bdev_nvme_set_options", 00:25:55.323 "params": { 00:25:55.323 "action_on_timeout": "none", 00:25:55.323 "allow_accel_sequence": false, 00:25:55.323 "arbitration_burst": 0, 00:25:55.323 "bdev_retry_count": 3, 00:25:55.323 "ctrlr_loss_timeout_sec": 0, 00:25:55.323 "delay_cmd_submit": true, 00:25:55.323 "dhchap_dhgroups": [ 00:25:55.323 "null", 00:25:55.323 "ffdhe2048", 00:25:55.323 "ffdhe3072", 00:25:55.323 "ffdhe4096", 00:25:55.323 "ffdhe6144", 00:25:55.323 "ffdhe8192" 00:25:55.323 ], 00:25:55.323 "dhchap_digests": [ 00:25:55.323 "sha256", 00:25:55.323 "sha384", 00:25:55.323 "sha512" 00:25:55.323 ], 00:25:55.323 "disable_auto_failback": false, 00:25:55.323 "fast_io_fail_timeout_sec": 0, 00:25:55.323 "generate_uuids": false, 00:25:55.323 "high_priority_weight": 0, 00:25:55.323 "io_path_stat": false, 00:25:55.323 "io_queue_requests": 0, 00:25:55.323 "keep_alive_timeout_ms": 10000, 00:25:55.323 "low_priority_weight": 0, 00:25:55.323 "medium_priority_weight": 0, 00:25:55.323 "nvme_adminq_poll_period_us": 10000, 00:25:55.323 "nvme_error_stat": false, 00:25:55.323 "nvme_ioq_poll_period_us": 0, 00:25:55.323 "rdma_cm_event_timeout_ms": 0, 00:25:55.323 "rdma_max_cq_size": 0, 00:25:55.323 "rdma_srq_size": 0, 00:25:55.323 "reconnect_delay_sec": 0, 00:25:55.323 "timeout_admin_us": 0, 
00:25:55.323 "timeout_us": 0, 00:25:55.323 "transport_ack_timeout": 0, 00:25:55.323 "transport_retry_count": 4, 00:25:55.323 "transport_tos": 0 00:25:55.323 } 00:25:55.323 }, 00:25:55.323 { 00:25:55.323 "method": "bdev_nvme_set_hotplug", 00:25:55.323 "params": { 00:25:55.323 "enable": false, 00:25:55.323 "period_us": 100000 00:25:55.323 } 00:25:55.323 }, 00:25:55.323 { 00:25:55.323 "method": "bdev_malloc_create", 00:25:55.323 "params": { 00:25:55.323 "block_size": 4096, 00:25:55.323 "name": "malloc0", 00:25:55.323 "num_blocks": 8192, 00:25:55.323 "optimal_io_boundary": 0, 00:25:55.323 "physical_block_size": 4096, 00:25:55.323 "uuid": "805d76af-706e-4c0e-9a87-38c4d3a87dcd" 00:25:55.323 } 00:25:55.323 }, 00:25:55.323 { 00:25:55.323 "method": "bdev_wait_for_examine" 00:25:55.323 } 00:25:55.323 ] 00:25:55.323 }, 00:25:55.323 { 00:25:55.323 "subsystem": "nbd", 00:25:55.323 "config": [] 00:25:55.323 }, 00:25:55.323 { 00:25:55.323 "subsystem": "scheduler", 00:25:55.323 "config": [ 00:25:55.323 { 00:25:55.323 "method": "framework_set_scheduler", 00:25:55.323 "params": { 00:25:55.323 "name": "static" 00:25:55.323 } 00:25:55.323 } 00:25:55.323 ] 00:25:55.323 }, 00:25:55.323 { 00:25:55.323 "subsystem": "nvmf", 00:25:55.323 "config": [ 00:25:55.323 { 00:25:55.323 "method": "nvmf_set_config", 00:25:55.323 "params": { 00:25:55.323 "admin_cmd_passthru": { 00:25:55.323 "identify_ctrlr": false 00:25:55.323 }, 00:25:55.323 "discovery_filter": "match_any" 00:25:55.323 } 00:25:55.323 }, 00:25:55.323 { 00:25:55.323 "method": "nvmf_set_max_subsystems", 00:25:55.323 "params": { 00:25:55.323 "max_subsystems": 1024 00:25:55.323 } 00:25:55.323 }, 00:25:55.323 { 00:25:55.323 "method": "nvmf_set_crdt", 00:25:55.323 "params": { 00:25:55.323 "crdt1": 0, 00:25:55.323 "crdt2": 0, 00:25:55.323 "crdt3": 0 00:25:55.323 } 00:25:55.323 }, 00:25:55.323 { 00:25:55.323 "method": "nvmf_create_transport", 00:25:55.323 "params": { 00:25:55.323 "abort_timeout_sec": 1, 00:25:55.323 "ack_timeout": 0, 00:25:55.323 "buf_cache_size": 4294967295, 00:25:55.323 "c2h_success": false, 00:25:55.323 "data_wr_pool_size": 0, 00:25:55.323 "dif_insert_or_strip": false, 00:25:55.323 "in_capsule_data_size": 4096, 00:25:55.323 "io_unit_size": 131072, 00:25:55.323 "max_aq_depth": 128, 00:25:55.323 "max_io_qpairs_per_ctrlr": 127, 00:25:55.323 "max_io_size": 131072, 00:25:55.323 "max_queue_depth": 128, 00:25:55.323 "num_shared_buffers": 511, 00:25:55.323 "sock_priority": 0, 00:25:55.323 "trtype": "TCP", 00:25:55.323 "zcopy": false 00:25:55.323 } 00:25:55.323 }, 00:25:55.323 { 00:25:55.323 "method": "nvmf_create_subsystem", 00:25:55.323 "params": { 00:25:55.323 "allow_any_host": false, 00:25:55.323 "ana_reporting": false, 00:25:55.323 "max_cntlid": 65519, 00:25:55.323 "max_namespaces": 32, 00:25:55.323 "min_cntlid": 1, 00:25:55.323 "model_number": "SPDK bdev Controller", 00:25:55.323 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:55.323 "serial_number": "00000000000000000000" 00:25:55.323 } 00:25:55.323 }, 00:25:55.323 { 00:25:55.323 "method": "nvmf_subsystem_add_host", 00:25:55.323 "params": { 00:25:55.323 "host": "nqn.2016-06.io.spdk:host1", 00:25:55.323 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:55.323 "psk": "key0" 00:25:55.323 } 00:25:55.323 }, 00:25:55.323 { 00:25:55.323 "method": "nvmf_subsystem_add_ns", 00:25:55.323 "params": { 00:25:55.323 "namespace": { 00:25:55.323 "bdev_name": "malloc0", 00:25:55.323 "nguid": "805D76AF706E4C0E9A8738C4D3A87DCD", 00:25:55.323 "no_auto_visible": false, 00:25:55.323 "nsid": 1, 00:25:55.323 "uuid": 
"805d76af-706e-4c0e-9a87-38c4d3a87dcd" 00:25:55.323 }, 00:25:55.323 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:25:55.323 } 00:25:55.323 }, 00:25:55.323 { 00:25:55.323 "method": "nvmf_subsystem_add_listener", 00:25:55.323 "params": { 00:25:55.323 "listen_address": { 00:25:55.323 "adrfam": "IPv4", 00:25:55.323 "traddr": "10.0.0.2", 00:25:55.323 "trsvcid": "4420", 00:25:55.323 "trtype": "TCP" 00:25:55.323 }, 00:25:55.323 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:55.323 "secure_channel": true 00:25:55.323 } 00:25:55.323 } 00:25:55.323 ] 00:25:55.323 } 00:25:55.323 ] 00:25:55.323 }' 00:25:55.323 10:07:08 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:25:55.583 10:07:08 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # bperfcfg='{ 00:25:55.583 "subsystems": [ 00:25:55.583 { 00:25:55.583 "subsystem": "keyring", 00:25:55.583 "config": [ 00:25:55.583 { 00:25:55.583 "method": "keyring_file_add_key", 00:25:55.583 "params": { 00:25:55.583 "name": "key0", 00:25:55.583 "path": "/tmp/tmp.9pVHJ6M57s" 00:25:55.583 } 00:25:55.583 } 00:25:55.583 ] 00:25:55.583 }, 00:25:55.583 { 00:25:55.583 "subsystem": "iobuf", 00:25:55.583 "config": [ 00:25:55.583 { 00:25:55.583 "method": "iobuf_set_options", 00:25:55.583 "params": { 00:25:55.583 "large_bufsize": 135168, 00:25:55.583 "large_pool_count": 1024, 00:25:55.583 "small_bufsize": 8192, 00:25:55.583 "small_pool_count": 8192 00:25:55.583 } 00:25:55.583 } 00:25:55.583 ] 00:25:55.583 }, 00:25:55.583 { 00:25:55.583 "subsystem": "sock", 00:25:55.583 "config": [ 00:25:55.583 { 00:25:55.583 "method": "sock_set_default_impl", 00:25:55.583 "params": { 00:25:55.583 "impl_name": "posix" 00:25:55.583 } 00:25:55.583 }, 00:25:55.583 { 00:25:55.583 "method": "sock_impl_set_options", 00:25:55.583 "params": { 00:25:55.583 "enable_ktls": false, 00:25:55.583 "enable_placement_id": 0, 00:25:55.583 "enable_quickack": false, 00:25:55.583 "enable_recv_pipe": true, 00:25:55.583 "enable_zerocopy_send_client": false, 00:25:55.583 "enable_zerocopy_send_server": true, 00:25:55.583 "impl_name": "ssl", 00:25:55.583 "recv_buf_size": 4096, 00:25:55.583 "send_buf_size": 4096, 00:25:55.583 "tls_version": 0, 00:25:55.583 "zerocopy_threshold": 0 00:25:55.583 } 00:25:55.583 }, 00:25:55.583 { 00:25:55.583 "method": "sock_impl_set_options", 00:25:55.583 "params": { 00:25:55.583 "enable_ktls": false, 00:25:55.583 "enable_placement_id": 0, 00:25:55.583 "enable_quickack": false, 00:25:55.583 "enable_recv_pipe": true, 00:25:55.583 "enable_zerocopy_send_client": false, 00:25:55.583 "enable_zerocopy_send_server": true, 00:25:55.583 "impl_name": "posix", 00:25:55.583 "recv_buf_size": 2097152, 00:25:55.583 "send_buf_size": 2097152, 00:25:55.583 "tls_version": 0, 00:25:55.583 "zerocopy_threshold": 0 00:25:55.583 } 00:25:55.583 } 00:25:55.583 ] 00:25:55.583 }, 00:25:55.583 { 00:25:55.583 "subsystem": "vmd", 00:25:55.583 "config": [] 00:25:55.583 }, 00:25:55.583 { 00:25:55.583 "subsystem": "accel", 00:25:55.583 "config": [ 00:25:55.583 { 00:25:55.583 "method": "accel_set_options", 00:25:55.583 "params": { 00:25:55.583 "buf_count": 2048, 00:25:55.583 "large_cache_size": 16, 00:25:55.583 "sequence_count": 2048, 00:25:55.583 "small_cache_size": 128, 00:25:55.583 "task_count": 2048 00:25:55.583 } 00:25:55.583 } 00:25:55.583 ] 00:25:55.583 }, 00:25:55.583 { 00:25:55.583 "subsystem": "bdev", 00:25:55.583 "config": [ 00:25:55.583 { 00:25:55.583 "method": "bdev_set_options", 00:25:55.583 "params": { 00:25:55.583 "bdev_auto_examine": true, 
00:25:55.583 "bdev_io_cache_size": 256, 00:25:55.583 "bdev_io_pool_size": 65535, 00:25:55.583 "iobuf_large_cache_size": 16, 00:25:55.583 "iobuf_small_cache_size": 128 00:25:55.583 } 00:25:55.583 }, 00:25:55.583 { 00:25:55.583 "method": "bdev_raid_set_options", 00:25:55.583 "params": { 00:25:55.583 "process_window_size_kb": 1024 00:25:55.583 } 00:25:55.583 }, 00:25:55.583 { 00:25:55.583 "method": "bdev_iscsi_set_options", 00:25:55.583 "params": { 00:25:55.583 "timeout_sec": 30 00:25:55.583 } 00:25:55.583 }, 00:25:55.583 { 00:25:55.583 "method": "bdev_nvme_set_options", 00:25:55.583 "params": { 00:25:55.583 "action_on_timeout": "none", 00:25:55.583 "allow_accel_sequence": false, 00:25:55.583 "arbitration_burst": 0, 00:25:55.583 "bdev_retry_count": 3, 00:25:55.583 "ctrlr_loss_timeout_sec": 0, 00:25:55.583 "delay_cmd_submit": true, 00:25:55.583 "dhchap_dhgroups": [ 00:25:55.583 "null", 00:25:55.583 "ffdhe2048", 00:25:55.583 "ffdhe3072", 00:25:55.583 "ffdhe4096", 00:25:55.583 "ffdhe6144", 00:25:55.583 "ffdhe8192" 00:25:55.583 ], 00:25:55.583 "dhchap_digests": [ 00:25:55.583 "sha256", 00:25:55.583 "sha384", 00:25:55.583 "sha512" 00:25:55.583 ], 00:25:55.583 "disable_auto_failback": false, 00:25:55.583 "fast_io_fail_timeout_sec": 0, 00:25:55.583 "generate_uuids": false, 00:25:55.583 "high_priority_weight": 0, 00:25:55.584 "io_path_stat": false, 00:25:55.584 "io_queue_requests": 512, 00:25:55.584 "keep_alive_timeout_ms": 10000, 00:25:55.584 "low_priority_weight": 0, 00:25:55.584 "medium_priority_weight": 0, 00:25:55.584 "nvme_adminq_poll_period_us": 10000, 00:25:55.584 "nvme_error_stat": false, 00:25:55.584 "nvme_ioq_poll_period_us": 0, 00:25:55.584 "rdma_cm_event_timeout_ms": 0, 00:25:55.584 "rdma_max_cq_size": 0, 00:25:55.584 "rdma_srq_size": 0, 00:25:55.584 "reconnect_delay_sec": 0, 00:25:55.584 "timeout_admin_us": 0, 00:25:55.584 "timeout_us": 0, 00:25:55.584 "transport_ack_timeout": 0, 00:25:55.584 "transport_retry_count": 4, 00:25:55.584 "transport_tos": 0 00:25:55.584 } 00:25:55.584 }, 00:25:55.584 { 00:25:55.584 "method": "bdev_nvme_attach_controller", 00:25:55.584 "params": { 00:25:55.584 "adrfam": "IPv4", 00:25:55.584 "ctrlr_loss_timeout_sec": 0, 00:25:55.584 "ddgst": false, 00:25:55.584 "fast_io_fail_timeout_sec": 0, 00:25:55.584 "hdgst": false, 00:25:55.584 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:55.584 "name": "nvme0", 00:25:55.584 "prchk_guard": false, 00:25:55.584 "prchk_reftag": false, 00:25:55.584 "psk": "key0", 00:25:55.584 "reconnect_delay_sec": 0, 00:25:55.584 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:55.584 "traddr": "10.0.0.2", 00:25:55.584 "trsvcid": "4420", 00:25:55.584 "trtype": "TCP" 00:25:55.584 } 00:25:55.584 }, 00:25:55.584 { 00:25:55.584 "method": "bdev_nvme_set_hotplug", 00:25:55.584 "params": { 00:25:55.584 "enable": false, 00:25:55.584 "period_us": 100000 00:25:55.584 } 00:25:55.584 }, 00:25:55.584 { 00:25:55.584 "method": "bdev_enable_histogram", 00:25:55.584 "params": { 00:25:55.584 "enable": true, 00:25:55.584 "name": "nvme0n1" 00:25:55.584 } 00:25:55.584 }, 00:25:55.584 { 00:25:55.584 "method": "bdev_wait_for_examine" 00:25:55.584 } 00:25:55.584 ] 00:25:55.584 }, 00:25:55.584 { 00:25:55.584 "subsystem": "nbd", 00:25:55.584 "config": [] 00:25:55.584 } 00:25:55.584 ] 00:25:55.584 }' 00:25:55.584 10:07:08 nvmf_tcp.nvmf_tls -- target/tls.sh@266 -- # killprocess 85059 00:25:55.584 10:07:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 85059 ']' 00:25:55.584 10:07:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 85059 
00:25:55.584 10:07:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:25:55.584 10:07:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:55.584 10:07:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 85059 00:25:55.584 10:07:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:25:55.584 10:07:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:25:55.584 killing process with pid 85059 00:25:55.584 10:07:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 85059' 00:25:55.584 10:07:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 85059 00:25:55.584 Received shutdown signal, test time was about 1.000000 seconds 00:25:55.584 00:25:55.584 Latency(us) 00:25:55.584 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:55.584 =================================================================================================================== 00:25:55.584 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:55.584 10:07:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 85059 00:25:55.843 10:07:09 nvmf_tcp.nvmf_tls -- target/tls.sh@267 -- # killprocess 85009 00:25:55.843 10:07:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 85009 ']' 00:25:55.843 10:07:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 85009 00:25:55.843 10:07:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:25:55.843 10:07:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:55.843 10:07:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 85009 00:25:55.843 killing process with pid 85009 00:25:55.843 10:07:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:25:55.843 10:07:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:25:55.843 10:07:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 85009' 00:25:55.844 10:07:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 85009 00:25:55.844 10:07:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 85009 00:25:56.103 10:07:09 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # nvmfappstart -c /dev/fd/62 00:25:56.103 10:07:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:56.103 10:07:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:56.103 10:07:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:56.103 10:07:09 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # echo '{ 00:25:56.103 "subsystems": [ 00:25:56.103 { 00:25:56.103 "subsystem": "keyring", 00:25:56.103 "config": [ 00:25:56.103 { 00:25:56.103 "method": "keyring_file_add_key", 00:25:56.103 "params": { 00:25:56.103 "name": "key0", 00:25:56.103 "path": "/tmp/tmp.9pVHJ6M57s" 00:25:56.103 } 00:25:56.103 } 00:25:56.103 ] 00:25:56.103 }, 00:25:56.103 { 00:25:56.103 "subsystem": "iobuf", 00:25:56.103 "config": [ 00:25:56.103 { 00:25:56.103 "method": "iobuf_set_options", 00:25:56.103 "params": { 00:25:56.103 "large_bufsize": 135168, 00:25:56.103 "large_pool_count": 1024, 00:25:56.103 "small_bufsize": 8192, 00:25:56.103 "small_pool_count": 8192 00:25:56.103 } 00:25:56.103 } 00:25:56.103 ] 00:25:56.103 }, 00:25:56.103 { 00:25:56.103 "subsystem": "sock", 00:25:56.103 "config": [ 00:25:56.103 { 
00:25:56.103 "method": "sock_set_default_impl", 00:25:56.103 "params": { 00:25:56.103 "impl_name": "posix" 00:25:56.103 } 00:25:56.103 }, 00:25:56.103 { 00:25:56.103 "method": "sock_impl_set_options", 00:25:56.103 "params": { 00:25:56.103 "enable_ktls": false, 00:25:56.103 "enable_placement_id": 0, 00:25:56.103 "enable_quickack": false, 00:25:56.103 "enable_recv_pipe": true, 00:25:56.103 "enable_zerocopy_send_client": false, 00:25:56.103 "enable_zerocopy_send_server": true, 00:25:56.103 "impl_name": "ssl", 00:25:56.103 "recv_buf_size": 4096, 00:25:56.103 "send_buf_size": 4096, 00:25:56.103 "tls_version": 0, 00:25:56.103 "zerocopy_threshold": 0 00:25:56.103 } 00:25:56.103 }, 00:25:56.103 { 00:25:56.103 "method": "sock_impl_set_options", 00:25:56.103 "params": { 00:25:56.103 "enable_ktls": false, 00:25:56.103 "enable_placement_id": 0, 00:25:56.103 "enable_quickack": false, 00:25:56.103 "enable_recv_pipe": true, 00:25:56.103 "enable_zerocopy_send_client": false, 00:25:56.103 "enable_zerocopy_send_server": true, 00:25:56.103 "impl_name": "posix", 00:25:56.103 "recv_buf_size": 2097152, 00:25:56.103 "send_buf_size": 2097152, 00:25:56.103 "tls_version": 0, 00:25:56.103 "zerocopy_threshold": 0 00:25:56.103 } 00:25:56.103 } 00:25:56.103 ] 00:25:56.103 }, 00:25:56.103 { 00:25:56.103 "subsystem": "vmd", 00:25:56.103 "config": [] 00:25:56.103 }, 00:25:56.103 { 00:25:56.103 "subsystem": "accel", 00:25:56.103 "config": [ 00:25:56.103 { 00:25:56.103 "method": "accel_set_options", 00:25:56.103 "params": { 00:25:56.103 "buf_count": 2048, 00:25:56.103 "large_cache_size": 16, 00:25:56.103 "sequence_count": 2048, 00:25:56.103 "small_cache_size": 128, 00:25:56.103 "task_count": 2048 00:25:56.103 } 00:25:56.103 } 00:25:56.103 ] 00:25:56.103 }, 00:25:56.103 { 00:25:56.103 "subsystem": "bdev", 00:25:56.103 "config": [ 00:25:56.103 { 00:25:56.103 "method": "bdev_set_options", 00:25:56.103 "params": { 00:25:56.103 "bdev_auto_examine": true, 00:25:56.103 "bdev_io_cache_size": 256, 00:25:56.103 "bdev_io_pool_size": 65535, 00:25:56.103 "iobuf_large_cache_size": 16, 00:25:56.103 "iobuf_small_cache_size": 128 00:25:56.103 } 00:25:56.103 }, 00:25:56.103 { 00:25:56.103 "method": "bdev_raid_set_options", 00:25:56.103 "params": { 00:25:56.103 "process_window_size_kb": 1024 00:25:56.103 } 00:25:56.103 }, 00:25:56.103 { 00:25:56.103 "method": "bdev_iscsi_set_options", 00:25:56.103 "params": { 00:25:56.103 "timeout_sec": 30 00:25:56.103 } 00:25:56.104 }, 00:25:56.104 { 00:25:56.104 "method": "bdev_nvme_set_options", 00:25:56.104 "params": { 00:25:56.104 "action_on_timeout": "none", 00:25:56.104 "allow_accel_sequence": false, 00:25:56.104 "arbitration_burst": 0, 00:25:56.104 "bdev_retry_count": 3, 00:25:56.104 "ctrlr_loss_timeout_sec": 0, 00:25:56.104 "delay_cmd_submit": true, 00:25:56.104 "dhchap_dhgroups": [ 00:25:56.104 "null", 00:25:56.104 "ffdhe2048", 00:25:56.104 "ffdhe3072", 00:25:56.104 "ffdhe4096", 00:25:56.104 "ffdhe6144", 00:25:56.104 "ffdhe8192" 00:25:56.104 ], 00:25:56.104 "dhchap_digests": [ 00:25:56.104 "sha256", 00:25:56.104 "sha384", 00:25:56.104 "sha512" 00:25:56.104 ], 00:25:56.104 "disable_auto_failback": false, 00:25:56.104 "fast_io_fail_timeout_sec": 0, 00:25:56.104 "generate_uuids": false, 00:25:56.104 "high_priority_weight": 0, 00:25:56.104 "io_path_stat": false, 00:25:56.104 "io_queue_requests": 0, 00:25:56.104 "keep_alive_timeout_ms": 10000, 00:25:56.104 "low_priority_weight": 0, 00:25:56.104 "medium_priority_weight": 0, 00:25:56.104 "nvme_adminq_poll_period_us": 10000, 00:25:56.104 "nvme_error_stat": 
false, 00:25:56.104 "nvme_ioq_poll_period_us": 0, 00:25:56.104 "rdma_cm_event_timeout_ms": 0, 00:25:56.104 "rdma_max_cq_size": 0, 00:25:56.104 "rdma_srq_size": 0, 00:25:56.104 "reconnect_delay_sec": 0, 00:25:56.104 "timeout_admin_us": 0, 00:25:56.104 "timeout_us": 0, 00:25:56.104 "transport_ack_timeout": 0, 00:25:56.104 "transport_retry_count": 4, 00:25:56.104 "transport_tos": 0 00:25:56.104 } 00:25:56.104 }, 00:25:56.104 { 00:25:56.104 "method": "bdev_nvme_set_hotplug", 00:25:56.104 "params": { 00:25:56.104 "enable": false, 00:25:56.104 "period_us": 100000 00:25:56.104 } 00:25:56.104 }, 00:25:56.104 { 00:25:56.104 "method": "bdev_malloc_create", 00:25:56.104 "params": { 00:25:56.104 "block_size": 4096, 00:25:56.104 "name": "malloc0", 00:25:56.104 "num_blocks": 8192, 00:25:56.104 "optimal_io_boundary": 0, 00:25:56.104 "physical_block_size": 4096, 00:25:56.104 "uuid": "805d76af-706e-4c0e-9a87-38c4d3a87dcd" 00:25:56.104 } 00:25:56.104 }, 00:25:56.104 { 00:25:56.104 "method": "bdev_wait_for_examine" 00:25:56.104 } 00:25:56.104 ] 00:25:56.104 }, 00:25:56.104 { 00:25:56.104 "subsystem": "nbd", 00:25:56.104 "config": [] 00:25:56.104 }, 00:25:56.104 { 00:25:56.104 "subsystem": "scheduler", 00:25:56.104 "config": [ 00:25:56.104 { 00:25:56.104 "method": "framework_set_scheduler", 00:25:56.104 "params": { 00:25:56.104 "name": "static" 00:25:56.104 } 00:25:56.104 } 00:25:56.104 ] 00:25:56.104 }, 00:25:56.104 { 00:25:56.104 "subsystem": "nvmf", 00:25:56.104 "config": [ 00:25:56.104 { 00:25:56.104 "method": "nvmf_set_config", 00:25:56.104 "params": { 00:25:56.104 "admin_cmd_passthru": { 00:25:56.104 "identify_ctrlr": false 00:25:56.104 }, 00:25:56.104 "discovery_filter": "match_any" 00:25:56.104 } 00:25:56.104 }, 00:25:56.104 { 00:25:56.104 "method": "nvmf_set_max_subsystems", 00:25:56.104 "params": { 00:25:56.104 "max_subsystems": 1024 00:25:56.104 } 00:25:56.104 }, 00:25:56.104 { 00:25:56.104 "method": "nvmf_set_crdt", 00:25:56.104 "params": { 00:25:56.104 "crdt1": 0, 00:25:56.104 "crdt2": 0, 00:25:56.104 "crdt3": 0 00:25:56.104 } 00:25:56.104 }, 00:25:56.104 { 00:25:56.104 "method": "nvmf_create_transport", 00:25:56.104 "params": { 00:25:56.104 "abort_timeout_sec": 1, 00:25:56.104 "ack_timeout": 0, 00:25:56.104 "buf_cache_size": 4294967295, 00:25:56.104 "c2h_success": false, 00:25:56.104 "data_wr_pool_size": 0, 00:25:56.104 "dif_insert_or_strip": false, 00:25:56.104 "in_capsule_data_size": 4096, 00:25:56.104 "io_unit_size": 131072, 00:25:56.104 "max_aq_depth": 128, 00:25:56.104 "max_io_qpairs_per_ctrlr": 127, 00:25:56.104 "max_io_size": 131072, 00:25:56.104 "max_queue_depth": 128, 00:25:56.104 "num_shared_buffers": 511, 00:25:56.104 "sock_priority": 0, 00:25:56.104 "trtype": "TCP", 00:25:56.104 "zcopy": false 00:25:56.104 } 00:25:56.104 }, 00:25:56.104 { 00:25:56.104 "method": "nvmf_create_subsystem", 00:25:56.104 "params": { 00:25:56.104 "allow_any_host": false, 00:25:56.104 "ana_reporting": false, 00:25:56.104 "max_cntlid": 65519, 00:25:56.104 "max_namespaces": 32, 00:25:56.104 "min_cntlid": 1, 00:25:56.104 "model_number": "SPDK bdev Controller", 00:25:56.104 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:56.104 "serial_number": "00000000000000000000" 00:25:56.104 } 00:25:56.104 }, 00:25:56.104 { 00:25:56.104 "method": "nvmf_subsystem_add_host", 00:25:56.104 "params": { 00:25:56.104 "host": "nqn.2016-06.io.spdk:host1", 00:25:56.104 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:56.104 "psk": "key0" 00:25:56.104 } 00:25:56.104 }, 00:25:56.104 { 00:25:56.104 "method": "nvmf_subsystem_add_ns", 00:25:56.104 
"params": { 00:25:56.104 "namespace": { 00:25:56.104 "bdev_name": "malloc0", 00:25:56.104 "nguid": "805D76AF706E4C0E9A8738C4D3A87DCD", 00:25:56.104 "no_auto_visible": false, 00:25:56.104 "nsid": 1, 00:25:56.104 "uuid": "805d76af-706e-4c0e-9a87-38c4d3a87dcd" 00:25:56.104 }, 00:25:56.104 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:25:56.104 } 00:25:56.104 }, 00:25:56.104 { 00:25:56.104 "method": "nvmf_subsystem_add_listener", 00:25:56.104 "params": { 00:25:56.104 "listen_address": { 00:25:56.104 "adrfam": "IPv4", 00:25:56.104 "traddr": "10.0.0.2", 00:25:56.104 "trsvcid": "4420", 00:25:56.104 "trtype": "TCP" 00:25:56.104 }, 00:25:56.104 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:56.104 "secure_channel": true 00:25:56.104 } 00:25:56.104 } 00:25:56.104 ] 00:25:56.104 } 00:25:56.104 ] 00:25:56.104 }' 00:25:56.104 10:07:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=85144 00:25:56.104 10:07:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 85144 00:25:56.104 10:07:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 85144 ']' 00:25:56.104 10:07:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:56.104 10:07:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:56.104 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:56.104 10:07:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:56.104 10:07:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:56.104 10:07:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:56.104 10:07:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:25:56.104 [2024-07-15 10:07:09.512493] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:25:56.104 [2024-07-15 10:07:09.512558] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:56.104 [2024-07-15 10:07:09.648073] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:56.363 [2024-07-15 10:07:09.751105] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:56.363 [2024-07-15 10:07:09.751155] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:56.363 [2024-07-15 10:07:09.751162] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:56.363 [2024-07-15 10:07:09.751167] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:56.363 [2024-07-15 10:07:09.751171] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:25:56.363 [2024-07-15 10:07:09.751240] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:56.621 [2024-07-15 10:07:09.964826] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:56.621 [2024-07-15 10:07:09.996719] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:25:56.621 [2024-07-15 10:07:09.996927] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:56.880 10:07:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:56.881 10:07:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:25:56.881 10:07:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:56.881 10:07:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:56.881 10:07:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:56.881 10:07:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:56.881 10:07:10 nvmf_tcp.nvmf_tls -- target/tls.sh@272 -- # bdevperf_pid=85188 00:25:56.881 10:07:10 nvmf_tcp.nvmf_tls -- target/tls.sh@273 -- # waitforlisten 85188 /var/tmp/bdevperf.sock 00:25:56.881 10:07:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 85188 ']' 00:25:56.881 10:07:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:56.881 10:07:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:56.881 10:07:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:56.881 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
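At this point the target side is fully up: nvmf_tgt was launched inside the nvmf_tgt_ns_spdk namespace with the JSON blob echoed above fed to it over /dev/fd/62, and it is listening on 10.0.0.2:4420 with TLS flagged as experimental. Below is a heavily trimmed stand-in for that launch, keeping only the pieces that matter for the TLS path (key file, malloc namespace, subsystem, host with PSK, secure listener). Every value is copied from the dump above; the trimming, and the heredoc used to stand in for the harness's own fd-62 plumbing, are the only liberties taken, and this cut-down form has not been re-verified end to end.

# Trimmed sketch of the target launch shown above; a heredoc on fd 62 is just one
# way to reproduce the "-c /dev/fd/62" pattern used by the test harness.
ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 62<<'JSON'
{
  "subsystems": [
    { "subsystem": "keyring", "config": [
        { "method": "keyring_file_add_key",
          "params": { "name": "key0", "path": "/tmp/tmp.9pVHJ6M57s" } }
    ] },
    { "subsystem": "bdev", "config": [
        { "method": "bdev_malloc_create",
          "params": { "name": "malloc0", "num_blocks": 8192, "block_size": 4096 } }
    ] },
    { "subsystem": "nvmf", "config": [
        { "method": "nvmf_create_transport", "params": { "trtype": "TCP" } },
        { "method": "nvmf_create_subsystem",
          "params": { "nqn": "nqn.2016-06.io.spdk:cnode1", "allow_any_host": false,
                      "serial_number": "00000000000000000000",
                      "model_number": "SPDK bdev Controller" } },
        { "method": "nvmf_subsystem_add_ns",
          "params": { "nqn": "nqn.2016-06.io.spdk:cnode1",
                      "namespace": { "nsid": 1, "bdev_name": "malloc0" } } },
        { "method": "nvmf_subsystem_add_host",
          "params": { "nqn": "nqn.2016-06.io.spdk:cnode1",
                      "host": "nqn.2016-06.io.spdk:host1", "psk": "key0" } },
        { "method": "nvmf_subsystem_add_listener",
          "params": { "nqn": "nqn.2016-06.io.spdk:cnode1", "secure_channel": true,
                      "listen_address": { "trtype": "TCP", "adrfam": "IPv4",
                                          "traddr": "10.0.0.2", "trsvcid": "4420" } } }
    ] }
  ]
}
JSON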
00:25:56.881 10:07:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:56.881 10:07:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:56.881 10:07:10 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:25:56.881 10:07:10 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # echo '{ 00:25:56.881 "subsystems": [ 00:25:56.881 { 00:25:56.881 "subsystem": "keyring", 00:25:56.881 "config": [ 00:25:56.881 { 00:25:56.881 "method": "keyring_file_add_key", 00:25:56.881 "params": { 00:25:56.881 "name": "key0", 00:25:56.881 "path": "/tmp/tmp.9pVHJ6M57s" 00:25:56.881 } 00:25:56.881 } 00:25:56.881 ] 00:25:56.881 }, 00:25:56.881 { 00:25:56.881 "subsystem": "iobuf", 00:25:56.881 "config": [ 00:25:56.881 { 00:25:56.881 "method": "iobuf_set_options", 00:25:56.881 "params": { 00:25:56.881 "large_bufsize": 135168, 00:25:56.881 "large_pool_count": 1024, 00:25:56.881 "small_bufsize": 8192, 00:25:56.881 "small_pool_count": 8192 00:25:56.881 } 00:25:56.881 } 00:25:56.881 ] 00:25:56.881 }, 00:25:56.881 { 00:25:56.881 "subsystem": "sock", 00:25:56.881 "config": [ 00:25:56.881 { 00:25:56.881 "method": "sock_set_default_impl", 00:25:56.881 "params": { 00:25:56.881 "impl_name": "posix" 00:25:56.881 } 00:25:56.881 }, 00:25:56.881 { 00:25:56.881 "method": "sock_impl_set_options", 00:25:56.881 "params": { 00:25:56.881 "enable_ktls": false, 00:25:56.881 "enable_placement_id": 0, 00:25:56.881 "enable_quickack": false, 00:25:56.881 "enable_recv_pipe": true, 00:25:56.881 "enable_zerocopy_send_client": false, 00:25:56.881 "enable_zerocopy_send_server": true, 00:25:56.881 "impl_name": "ssl", 00:25:56.881 "recv_buf_size": 4096, 00:25:56.881 "send_buf_size": 4096, 00:25:56.881 "tls_version": 0, 00:25:56.881 "zerocopy_threshold": 0 00:25:56.881 } 00:25:56.881 }, 00:25:56.881 { 00:25:56.881 "method": "sock_impl_set_options", 00:25:56.881 "params": { 00:25:56.881 "enable_ktls": false, 00:25:56.881 "enable_placement_id": 0, 00:25:56.881 "enable_quickack": false, 00:25:56.881 "enable_recv_pipe": true, 00:25:56.881 "enable_zerocopy_send_client": false, 00:25:56.881 "enable_zerocopy_send_server": true, 00:25:56.881 "impl_name": "posix", 00:25:56.881 "recv_buf_size": 2097152, 00:25:56.881 "send_buf_size": 2097152, 00:25:56.881 "tls_version": 0, 00:25:56.881 "zerocopy_threshold": 0 00:25:56.881 } 00:25:56.881 } 00:25:56.881 ] 00:25:56.881 }, 00:25:56.881 { 00:25:56.881 "subsystem": "vmd", 00:25:56.881 "config": [] 00:25:56.881 }, 00:25:56.881 { 00:25:56.881 "subsystem": "accel", 00:25:56.881 "config": [ 00:25:56.881 { 00:25:56.881 "method": "accel_set_options", 00:25:56.881 "params": { 00:25:56.881 "buf_count": 2048, 00:25:56.881 "large_cache_size": 16, 00:25:56.881 "sequence_count": 2048, 00:25:56.881 "small_cache_size": 128, 00:25:56.881 "task_count": 2048 00:25:56.881 } 00:25:56.881 } 00:25:56.881 ] 00:25:56.881 }, 00:25:56.881 { 00:25:56.881 "subsystem": "bdev", 00:25:56.881 "config": [ 00:25:56.881 { 00:25:56.881 "method": "bdev_set_options", 00:25:56.881 "params": { 00:25:56.881 "bdev_auto_examine": true, 00:25:56.881 "bdev_io_cache_size": 256, 00:25:56.881 "bdev_io_pool_size": 65535, 00:25:56.881 "iobuf_large_cache_size": 16, 00:25:56.881 "iobuf_small_cache_size": 128 00:25:56.881 } 00:25:56.881 }, 00:25:56.881 { 00:25:56.881 "method": "bdev_raid_set_options", 00:25:56.881 "params": { 00:25:56.881 "process_window_size_kb": 1024 00:25:56.881 } 00:25:56.881 }, 00:25:56.881 
{ 00:25:56.881 "method": "bdev_iscsi_set_options", 00:25:56.881 "params": { 00:25:56.881 "timeout_sec": 30 00:25:56.881 } 00:25:56.881 }, 00:25:56.881 { 00:25:56.881 "method": "bdev_nvme_set_options", 00:25:56.881 "params": { 00:25:56.881 "action_on_timeout": "none", 00:25:56.881 "allow_accel_sequence": false, 00:25:56.881 "arbitration_burst": 0, 00:25:56.881 "bdev_retry_count": 3, 00:25:56.881 "ctrlr_loss_timeout_sec": 0, 00:25:56.881 "delay_cmd_submit": true, 00:25:56.881 "dhchap_dhgroups": [ 00:25:56.881 "null", 00:25:56.881 "ffdhe2048", 00:25:56.881 "ffdhe3072", 00:25:56.881 "ffdhe4096", 00:25:56.881 "ffdhe6144", 00:25:56.881 "ffdhe8192" 00:25:56.881 ], 00:25:56.881 "dhchap_digests": [ 00:25:56.881 "sha256", 00:25:56.881 "sha384", 00:25:56.881 "sha512" 00:25:56.881 ], 00:25:56.881 "disable_auto_failback": false, 00:25:56.881 "fast_io_fail_timeout_sec": 0, 00:25:56.881 "generate_uuids": false, 00:25:56.881 "high_priority_weight": 0, 00:25:56.881 "io_path_stat": false, 00:25:56.881 "io_queue_requests": 512, 00:25:56.881 "keep_alive_timeout_ms": 10000, 00:25:56.881 "low_priority_weight": 0, 00:25:56.881 "medium_priority_weight": 0, 00:25:56.881 "nvme_adminq_poll_period_us": 10000, 00:25:56.881 "nvme_error_stat": false, 00:25:56.881 "nvme_ioq_poll_period_us": 0, 00:25:56.881 "rdma_cm_event_timeout_ms": 0, 00:25:56.881 "rdma_max_cq_size": 0, 00:25:56.881 "rdma_srq_size": 0, 00:25:56.881 "reconnect_delay_sec": 0, 00:25:56.881 "timeout_admin_us": 0, 00:25:56.881 "timeout_us": 0, 00:25:56.881 "transport_ack_timeout": 0, 00:25:56.881 "transport_retry_count": 4, 00:25:56.881 "transport_tos": 0 00:25:56.881 } 00:25:56.881 }, 00:25:56.881 { 00:25:56.881 "method": "bdev_nvme_attach_controller", 00:25:56.881 "params": { 00:25:56.881 "adrfam": "IPv4", 00:25:56.881 "ctrlr_loss_timeout_sec": 0, 00:25:56.881 "ddgst": false, 00:25:56.881 "fast_io_fail_timeout_sec": 0, 00:25:56.881 "hdgst": false, 00:25:56.881 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:56.881 "name": "nvme0", 00:25:56.881 "prchk_guard": false, 00:25:56.881 "prchk_reftag": false, 00:25:56.881 "psk": "key0", 00:25:56.881 "reconnect_delay_sec": 0, 00:25:56.881 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:56.881 "traddr": "10.0.0.2", 00:25:56.881 "trsvcid": "4420", 00:25:56.881 "trtype": "TCP" 00:25:56.881 } 00:25:56.881 }, 00:25:56.881 { 00:25:56.881 "method": "bdev_nvme_set_hotplug", 00:25:56.881 "params": { 00:25:56.881 "enable": false, 00:25:56.881 "period_us": 100000 00:25:56.881 } 00:25:56.881 }, 00:25:56.881 { 00:25:56.881 "method": "bdev_enable_histogram", 00:25:56.881 "params": { 00:25:56.881 "enable": true, 00:25:56.881 "name": "nvme0n1" 00:25:56.881 } 00:25:56.881 }, 00:25:56.881 { 00:25:56.881 "method": "bdev_wait_for_examine" 00:25:56.881 } 00:25:56.881 ] 00:25:56.881 }, 00:25:56.881 { 00:25:56.881 "subsystem": "nbd", 00:25:56.881 "config": [] 00:25:56.881 } 00:25:56.881 ] 00:25:56.881 }' 00:25:56.881 [2024-07-15 10:07:10.450293] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:25:56.881 [2024-07-15 10:07:10.450359] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85188 ] 00:25:57.141 [2024-07-15 10:07:10.587968] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:57.141 [2024-07-15 10:07:10.689344] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:57.399 [2024-07-15 10:07:10.841936] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:57.967 10:07:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:57.968 10:07:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:25:57.968 10:07:11 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:57.968 10:07:11 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # jq -r '.[].name' 00:25:57.968 10:07:11 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:57.968 10:07:11 nvmf_tcp.nvmf_tls -- target/tls.sh@276 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:58.227 Running I/O for 1 seconds... 00:25:59.167 00:25:59.167 Latency(us) 00:25:59.168 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:59.168 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:25:59.168 Verification LBA range: start 0x0 length 0x2000 00:25:59.168 nvme0n1 : 1.01 6389.05 24.96 0.00 0.00 19892.59 4521.70 16484.16 00:25:59.168 =================================================================================================================== 00:25:59.168 Total : 6389.05 24.96 0.00 0.00 19892.59 4521.70 16484.16 00:25:59.168 0 00:25:59.168 10:07:12 nvmf_tcp.nvmf_tls -- target/tls.sh@278 -- # trap - SIGINT SIGTERM EXIT 00:25:59.168 10:07:12 nvmf_tcp.nvmf_tls -- target/tls.sh@279 -- # cleanup 00:25:59.168 10:07:12 nvmf_tcp.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:25:59.168 10:07:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@806 -- # type=--id 00:25:59.168 10:07:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@807 -- # id=0 00:25:59.168 10:07:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:25:59.168 10:07:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:25:59.168 10:07:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:25:59.168 10:07:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:25:59.168 10:07:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@818 -- # for n in $shm_files 00:25:59.168 10:07:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:25:59.168 nvmf_trace.0 00:25:59.168 10:07:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@821 -- # return 0 00:25:59.168 10:07:12 nvmf_tcp.nvmf_tls -- target/tls.sh@16 -- # killprocess 85188 00:25:59.168 10:07:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 85188 ']' 00:25:59.168 10:07:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 85188 00:25:59.168 10:07:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:25:59.168 
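The actual check in this part of the run is small compared to the configuration dumps around it: once bdevperf is up on /var/tmp/bdevperf.sock with the controller pre-attached through its JSON config, the test asks it which controllers exist, expects to see nvme0, and then drives the one-second verify workload through the perf helper. Condensed from the commands shown in the trace, with paths and socket name exactly as above:

sock=/var/tmp/bdevperf.sock
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
perf=/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py

name=$("$rpc" -s "$sock" bdev_nvme_get_controllers | jq -r '.[].name')
[[ $name == nvme0 ]]                  # the attach in the config above created nvme0
"$perf" -s "$sock" perform_tests      # runs the "-t 1 -w verify" workload configured at launch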
10:07:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:59.168 10:07:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 85188 00:25:59.427 10:07:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:25:59.427 10:07:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:25:59.427 10:07:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 85188' 00:25:59.427 killing process with pid 85188 00:25:59.427 10:07:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 85188 00:25:59.427 Received shutdown signal, test time was about 1.000000 seconds 00:25:59.427 00:25:59.427 Latency(us) 00:25:59.427 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:59.427 =================================================================================================================== 00:25:59.427 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:59.427 10:07:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 85188 00:25:59.427 10:07:12 nvmf_tcp.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:25:59.427 10:07:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:59.427 10:07:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@117 -- # sync 00:25:59.427 10:07:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:59.427 10:07:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@120 -- # set +e 00:25:59.427 10:07:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:59.427 10:07:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:59.427 rmmod nvme_tcp 00:25:59.427 rmmod nvme_fabrics 00:25:59.686 rmmod nvme_keyring 00:25:59.686 10:07:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:59.686 10:07:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@124 -- # set -e 00:25:59.686 10:07:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@125 -- # return 0 00:25:59.686 10:07:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@489 -- # '[' -n 85144 ']' 00:25:59.686 10:07:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@490 -- # killprocess 85144 00:25:59.686 10:07:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 85144 ']' 00:25:59.686 10:07:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 85144 00:25:59.686 10:07:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:25:59.686 10:07:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:59.686 10:07:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 85144 00:25:59.686 killing process with pid 85144 00:25:59.686 10:07:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:25:59.686 10:07:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:25:59.686 10:07:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 85144' 00:25:59.686 10:07:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 85144 00:25:59.686 10:07:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 85144 00:25:59.686 10:07:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:59.686 10:07:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:59.686 10:07:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:59.686 10:07:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk 
== \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:59.686 10:07:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:59.686 10:07:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:59.686 10:07:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:59.686 10:07:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:59.946 10:07:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:25:59.946 10:07:13 nvmf_tcp.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.xrzdhqAweF /tmp/tmp.TpEooliSSY /tmp/tmp.9pVHJ6M57s 00:25:59.946 00:25:59.946 real 1m20.775s 00:25:59.946 user 2m6.690s 00:25:59.946 sys 0m25.856s 00:25:59.946 10:07:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:59.946 10:07:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:59.946 ************************************ 00:25:59.946 END TEST nvmf_tls 00:25:59.946 ************************************ 00:25:59.946 10:07:13 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:25:59.946 10:07:13 nvmf_tcp -- nvmf/nvmf.sh@62 -- # run_test nvmf_fips /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:25:59.946 10:07:13 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:25:59.946 10:07:13 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:59.946 10:07:13 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:59.946 ************************************ 00:25:59.946 START TEST nvmf_fips 00:25:59.946 ************************************ 00:25:59.946 10:07:13 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:25:59.946 * Looking for test storage... 
00:25:59.946 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/fips 00:25:59.946 10:07:13 nvmf_tcp.nvmf_fips -- fips/fips.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:25:59.946 10:07:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:25:59.946 10:07:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:59.946 10:07:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:59.946 10:07:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:59.946 10:07:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:59.946 10:07:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:59.946 10:07:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:59.946 10:07:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:59.946 10:07:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:59.946 10:07:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:59.946 10:07:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:59.946 10:07:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec 00:25:59.946 10:07:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=a2b6b25a-cc90-4aea-9f09-c06f8a634aec 00:25:59.946 10:07:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:59.946 10:07:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:59.946 10:07:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:25:59.946 10:07:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:59.946 10:07:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:59.946 10:07:13 nvmf_tcp.nvmf_fips -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:59.946 10:07:13 nvmf_tcp.nvmf_fips -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:59.946 10:07:13 nvmf_tcp.nvmf_fips -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:59.946 10:07:13 nvmf_tcp.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:59.946 10:07:13 nvmf_tcp.nvmf_fips -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:59.946 10:07:13 nvmf_tcp.nvmf_fips -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:59.946 10:07:13 nvmf_tcp.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:25:59.946 10:07:13 nvmf_tcp.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:59.946 10:07:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@47 -- # : 0 00:25:59.946 10:07:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:59.946 10:07:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:59.946 10:07:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:59.946 10:07:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:59.946 10:07:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:59.946 10:07:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:59.946 10:07:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:59.946 10:07:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:59.946 10:07:13 nvmf_tcp.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:59.946 10:07:13 nvmf_tcp.nvmf_fips -- fips/fips.sh@89 -- # check_openssl_version 00:25:59.946 10:07:13 nvmf_tcp.nvmf_fips -- fips/fips.sh@83 -- # local target=3.0.0 00:25:59.946 10:07:13 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # openssl version 00:25:59.946 10:07:13 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # awk '{print $2}' 00:26:00.208 10:07:13 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:26:00.208 10:07:13 nvmf_tcp.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:26:00.208 10:07:13 nvmf_tcp.nvmf_fips -- scripts/common.sh@330 -- # local ver1 ver1_l 00:26:00.208 10:07:13 nvmf_tcp.nvmf_fips -- scripts/common.sh@331 -- # local ver2 ver2_l 00:26:00.208 10:07:13 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # IFS=.-: 00:26:00.208 10:07:13 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # read -ra ver1 00:26:00.208 10:07:13 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # IFS=.-: 00:26:00.208 10:07:13 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # read -ra ver2 00:26:00.208 10:07:13 nvmf_tcp.nvmf_fips -- scripts/common.sh@335 -- # local 'op=>=' 00:26:00.208 10:07:13 nvmf_tcp.nvmf_fips -- scripts/common.sh@337 -- # ver1_l=3 00:26:00.208 10:07:13 nvmf_tcp.nvmf_fips -- scripts/common.sh@338 -- # ver2_l=3 00:26:00.208 10:07:13 nvmf_tcp.nvmf_fips -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 v 00:26:00.208 10:07:13 nvmf_tcp.nvmf_fips -- 
scripts/common.sh@341 -- # case "$op" in 00:26:00.208 10:07:13 nvmf_tcp.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:26:00.208 10:07:13 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v = 0 )) 00:26:00.208 10:07:13 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:00.208 10:07:13 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 3 00:26:00.208 10:07:13 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:26:00.208 10:07:13 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:26:00.208 10:07:13 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:26:00.208 10:07:13 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=3 00:26:00.208 10:07:13 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 3 00:26:00.208 10:07:13 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:26:00.208 10:07:13 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:26:00.208 10:07:13 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:26:00.208 10:07:13 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=3 00:26:00.208 10:07:13 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:26:00.208 10:07:13 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:26:00.208 10:07:13 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:26:00.208 10:07:13 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:00.208 10:07:13 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 0 00:26:00.208 10:07:13 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:26:00.208 10:07:13 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:26:00.208 10:07:13 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:26:00.208 10:07:13 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=0 00:26:00.208 10:07:13 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:26:00.208 10:07:13 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:26:00.208 10:07:13 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:26:00.208 10:07:13 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:26:00.208 10:07:13 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:26:00.208 10:07:13 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:26:00.208 10:07:13 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:26:00.208 10:07:13 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:26:00.208 10:07:13 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:00.208 10:07:13 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 9 00:26:00.208 10:07:13 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=9 00:26:00.208 10:07:13 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:26:00.208 10:07:13 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 9 00:26:00.208 10:07:13 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=9 00:26:00.208 10:07:13 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:26:00.208 10:07:13 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:26:00.208 10:07:13 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:26:00.208 10:07:13 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:26:00.208 10:07:13 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:26:00.208 10:07:13 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:26:00.208 10:07:13 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # return 0 00:26:00.208 10:07:13 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # openssl info -modulesdir 00:26:00.208 10:07:13 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:26:00.208 10:07:13 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:26:00.208 10:07:13 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:26:00.208 10:07:13 nvmf_tcp.nvmf_fips -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:26:00.208 10:07:13 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:26:00.208 10:07:13 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # callback=build_openssl_config 00:26:00.208 10:07:13 nvmf_tcp.nvmf_fips -- fips/fips.sh@113 -- # build_openssl_config 00:26:00.208 10:07:13 nvmf_tcp.nvmf_fips -- fips/fips.sh@37 -- # cat 00:26:00.208 10:07:13 nvmf_tcp.nvmf_fips -- fips/fips.sh@57 -- # [[ ! 
-t 0 ]] 00:26:00.208 10:07:13 nvmf_tcp.nvmf_fips -- fips/fips.sh@58 -- # cat - 00:26:00.208 10:07:13 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:26:00.208 10:07:13 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:26:00.208 10:07:13 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # mapfile -t providers 00:26:00.208 10:07:13 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # openssl list -providers 00:26:00.208 10:07:13 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # grep name 00:26:00.208 10:07:13 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:26:00.208 10:07:13 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:26:00.208 10:07:13 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:26:00.208 10:07:13 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:26:00.208 10:07:13 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # : 00:26:00.208 10:07:13 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@648 -- # local es=0 00:26:00.208 10:07:13 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@650 -- # valid_exec_arg openssl md5 /dev/fd/62 00:26:00.208 10:07:13 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@636 -- # local arg=openssl 00:26:00.208 10:07:13 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:00.209 10:07:13 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # type -t openssl 00:26:00.209 10:07:13 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:00.209 10:07:13 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # type -P openssl 00:26:00.209 10:07:13 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:00.209 10:07:13 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # arg=/usr/bin/openssl 00:26:00.209 10:07:13 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # [[ -x /usr/bin/openssl ]] 00:26:00.209 10:07:13 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # openssl md5 /dev/fd/62 00:26:00.209 Error setting digest 00:26:00.209 00725A441B7F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:26:00.209 00725A441B7F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:26:00.209 10:07:13 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # es=1 00:26:00.209 10:07:13 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:26:00.209 10:07:13 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:26:00.209 10:07:13 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:26:00.209 10:07:13 nvmf_tcp.nvmf_fips -- fips/fips.sh@130 -- # nvmftestinit 00:26:00.209 10:07:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:00.209 10:07:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:00.209 10:07:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:00.209 10:07:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:00.209 10:07:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:00.209 10:07:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:00.209 10:07:13 nvmf_tcp.nvmf_fips -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:00.209 10:07:13 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:00.209 10:07:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:26:00.209 10:07:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:26:00.209 10:07:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:26:00.209 10:07:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:26:00.209 10:07:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:26:00.209 10:07:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@432 -- # nvmf_veth_init 00:26:00.209 10:07:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:00.209 10:07:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:00.209 10:07:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:26:00.209 10:07:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:26:00.209 10:07:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:26:00.209 10:07:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:26:00.209 10:07:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:26:00.209 10:07:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:00.209 10:07:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:26:00.209 10:07:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:26:00.209 10:07:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:26:00.209 10:07:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:26:00.209 10:07:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:26:00.209 10:07:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:26:00.209 Cannot find device "nvmf_tgt_br" 00:26:00.209 10:07:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@155 -- # true 00:26:00.209 10:07:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:26:00.209 Cannot find device "nvmf_tgt_br2" 00:26:00.209 10:07:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@156 -- # true 00:26:00.209 10:07:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:26:00.481 10:07:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:26:00.481 Cannot find device "nvmf_tgt_br" 00:26:00.481 10:07:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@158 -- # true 00:26:00.481 10:07:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:26:00.481 Cannot find device "nvmf_tgt_br2" 00:26:00.481 10:07:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@159 -- # true 00:26:00.481 10:07:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:26:00.481 10:07:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:26:00.481 10:07:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:26:00.481 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:00.481 10:07:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@162 -- # true 00:26:00.481 10:07:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@163 -- # ip netns 
exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:26:00.481 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:00.481 10:07:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@163 -- # true 00:26:00.481 10:07:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:26:00.481 10:07:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:26:00.481 10:07:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:26:00.481 10:07:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:26:00.481 10:07:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:26:00.481 10:07:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:26:00.481 10:07:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:26:00.481 10:07:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:26:00.481 10:07:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:26:00.481 10:07:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:26:00.481 10:07:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:26:00.481 10:07:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:26:00.481 10:07:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:26:00.481 10:07:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:26:00.481 10:07:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:26:00.481 10:07:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:26:00.481 10:07:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:26:00.481 10:07:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:26:00.481 10:07:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:26:00.481 10:07:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:26:00.481 10:07:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:26:00.481 10:07:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:26:00.481 10:07:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:26:00.481 10:07:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:26:00.481 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:00.481 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.082 ms 00:26:00.481 00:26:00.481 --- 10.0.0.2 ping statistics --- 00:26:00.481 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:00.481 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:26:00.481 10:07:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:26:00.481 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:26:00.481 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.049 ms 00:26:00.481 00:26:00.481 --- 10.0.0.3 ping statistics --- 00:26:00.481 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:00.481 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:26:00.481 10:07:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:26:00.481 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:00.481 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:26:00.481 00:26:00.481 --- 10.0.0.1 ping statistics --- 00:26:00.481 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:00.481 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:26:00.481 10:07:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:00.481 10:07:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@433 -- # return 0 00:26:00.481 10:07:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:00.481 10:07:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:00.481 10:07:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:00.481 10:07:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:00.481 10:07:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:00.481 10:07:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:00.481 10:07:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:00.759 10:07:14 nvmf_tcp.nvmf_fips -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:26:00.759 10:07:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:00.759 10:07:14 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:00.759 10:07:14 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:26:00.759 10:07:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@481 -- # nvmfpid=85468 00:26:00.759 10:07:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@482 -- # waitforlisten 85468 00:26:00.759 10:07:14 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 85468 ']' 00:26:00.759 10:07:14 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:00.759 10:07:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:26:00.759 10:07:14 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:00.759 10:07:14 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:00.759 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:00.759 10:07:14 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:00.759 10:07:14 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:26:00.759 [2024-07-15 10:07:14.150493] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
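The target now starting up runs inside a network namespace that was stitched together a few entries earlier. Condensed from the ip and iptables commands visible in the trace, the topology is one veth pair left on the host for the initiator and one whose far end is moved into nvmf_tgt_ns_spdk for the target, both bridged together, with 4420/tcp opened and a ping used as the smoke test; the second target interface and the teardown of any previous run are omitted here.

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator side stays on the host
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br       # target side goes into the namespace
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if

ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br

iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

ping -c 1 10.0.0.2                                            # host to target address
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1             # target namespace back to host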
00:26:00.759 [2024-07-15 10:07:14.150568] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:00.759 [2024-07-15 10:07:14.275886] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:01.019 [2024-07-15 10:07:14.379578] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:01.019 [2024-07-15 10:07:14.379621] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:01.019 [2024-07-15 10:07:14.379628] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:01.019 [2024-07-15 10:07:14.379633] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:01.019 [2024-07-15 10:07:14.379637] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:01.019 [2024-07-15 10:07:14.379655] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:01.589 10:07:14 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:01.589 10:07:14 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 00:26:01.589 10:07:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:01.589 10:07:14 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:01.589 10:07:14 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:26:01.589 10:07:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:01.589 10:07:15 nvmf_tcp.nvmf_fips -- fips/fips.sh@133 -- # trap cleanup EXIT 00:26:01.589 10:07:15 nvmf_tcp.nvmf_fips -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:26:01.589 10:07:15 nvmf_tcp.nvmf_fips -- fips/fips.sh@137 -- # key_path=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:26:01.589 10:07:15 nvmf_tcp.nvmf_fips -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:26:01.589 10:07:15 nvmf_tcp.nvmf_fips -- fips/fips.sh@139 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:26:01.589 10:07:15 nvmf_tcp.nvmf_fips -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:26:01.589 10:07:15 nvmf_tcp.nvmf_fips -- fips/fips.sh@22 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:26:01.589 10:07:15 nvmf_tcp.nvmf_fips -- fips/fips.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:26:01.849 [2024-07-15 10:07:15.218891] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:01.849 [2024-07-15 10:07:15.234771] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:26:01.849 [2024-07-15 10:07:15.234929] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:01.849 [2024-07-15 10:07:15.263267] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:26:01.849 malloc0 00:26:01.849 10:07:15 nvmf_tcp.nvmf_fips -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:26:01.849 10:07:15 nvmf_tcp.nvmf_fips -- fips/fips.sh@147 -- # bdevperf_pid=85526 00:26:01.849 10:07:15 nvmf_tcp.nvmf_fips -- 
fips/fips.sh@145 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:26:01.849 10:07:15 nvmf_tcp.nvmf_fips -- fips/fips.sh@148 -- # waitforlisten 85526 /var/tmp/bdevperf.sock 00:26:01.849 10:07:15 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 85526 ']' 00:26:01.849 10:07:15 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:01.849 10:07:15 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:01.849 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:26:01.849 10:07:15 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:01.849 10:07:15 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:01.849 10:07:15 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:26:01.849 [2024-07-15 10:07:15.371203] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:26:01.849 [2024-07-15 10:07:15.371287] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85526 ] 00:26:02.108 [2024-07-15 10:07:15.507276] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:02.108 [2024-07-15 10:07:15.612282] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:26:02.676 10:07:16 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:02.676 10:07:16 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 00:26:02.676 10:07:16 nvmf_tcp.nvmf_fips -- fips/fips.sh@150 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:26:02.933 [2024-07-15 10:07:16.386003] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:26:02.933 [2024-07-15 10:07:16.386092] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:26:02.933 TLSTESTn1 00:26:02.933 10:07:16 nvmf_tcp.nvmf_fips -- fips/fips.sh@154 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:26:03.190 Running I/O for 10 seconds... 
00:26:13.167 00:26:13.167 Latency(us) 00:26:13.167 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:13.167 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:26:13.167 Verification LBA range: start 0x0 length 0x2000 00:26:13.167 TLSTESTn1 : 10.01 6290.67 24.57 0.00 0.00 20313.61 4550.32 19918.37 00:26:13.167 =================================================================================================================== 00:26:13.167 Total : 6290.67 24.57 0.00 0.00 20313.61 4550.32 19918.37 00:26:13.167 0 00:26:13.167 10:07:26 nvmf_tcp.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:26:13.167 10:07:26 nvmf_tcp.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:26:13.167 10:07:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@806 -- # type=--id 00:26:13.167 10:07:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@807 -- # id=0 00:26:13.167 10:07:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:26:13.167 10:07:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:26:13.167 10:07:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:26:13.167 10:07:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:26:13.167 10:07:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@818 -- # for n in $shm_files 00:26:13.167 10:07:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:26:13.167 nvmf_trace.0 00:26:13.167 10:07:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@821 -- # return 0 00:26:13.167 10:07:26 nvmf_tcp.nvmf_fips -- fips/fips.sh@16 -- # killprocess 85526 00:26:13.167 10:07:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 85526 ']' 00:26:13.167 10:07:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # kill -0 85526 00:26:13.167 10:07:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # uname 00:26:13.167 10:07:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:13.167 10:07:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 85526 00:26:13.167 10:07:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:26:13.167 10:07:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:26:13.167 10:07:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 85526' 00:26:13.167 killing process with pid 85526 00:26:13.167 10:07:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@967 -- # kill 85526 00:26:13.167 Received shutdown signal, test time was about 10.000000 seconds 00:26:13.167 00:26:13.167 Latency(us) 00:26:13.167 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:13.167 =================================================================================================================== 00:26:13.167 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:13.167 [2024-07-15 10:07:26.730576] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:26:13.167 10:07:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@972 -- # wait 85526 00:26:13.426 10:07:26 nvmf_tcp.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:26:13.426 10:07:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@488 -- # nvmfcleanup 
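(The nvmftestfini/cleanup sequence traced next is easier to follow condensed. A sketch of what the helpers do, taken from the commands that appear below — killprocess and remove_spdk_ns are wrappers, and <nvmf_tgt pid> is 85468 in this run:)
  sync
  modprobe -v -r nvme-tcp        # the rmmod lines below show nvme_tcp, nvme_fabrics and nvme_keyring unloading
  modprobe -v -r nvme-fabrics
  kill <nvmf_tgt pid>            # stop the target started for this test
  ip -4 addr flush nvmf_init_if  # drop the initiator-side test address
  rm -f test/nvmf/fips/key.txt   # remove the PSK written for the TLS test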
00:26:13.426 10:07:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@117 -- # sync 00:26:13.426 10:07:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:13.426 10:07:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@120 -- # set +e 00:26:13.427 10:07:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:13.427 10:07:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:13.427 rmmod nvme_tcp 00:26:13.427 rmmod nvme_fabrics 00:26:13.427 rmmod nvme_keyring 00:26:13.686 10:07:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:13.686 10:07:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@124 -- # set -e 00:26:13.686 10:07:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@125 -- # return 0 00:26:13.686 10:07:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@489 -- # '[' -n 85468 ']' 00:26:13.686 10:07:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@490 -- # killprocess 85468 00:26:13.686 10:07:27 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 85468 ']' 00:26:13.686 10:07:27 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # kill -0 85468 00:26:13.686 10:07:27 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # uname 00:26:13.686 10:07:27 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:13.686 10:07:27 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 85468 00:26:13.686 10:07:27 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:26:13.686 10:07:27 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:26:13.686 10:07:27 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 85468' 00:26:13.686 killing process with pid 85468 00:26:13.686 10:07:27 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@967 -- # kill 85468 00:26:13.686 [2024-07-15 10:07:27.055748] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:26:13.686 10:07:27 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@972 -- # wait 85468 00:26:13.686 10:07:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:13.686 10:07:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:13.686 10:07:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:13.686 10:07:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:13.686 10:07:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:13.686 10:07:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:13.686 10:07:27 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:13.686 10:07:27 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:13.945 10:07:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:26:13.945 10:07:27 nvmf_tcp.nvmf_fips -- fips/fips.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:26:13.945 00:26:13.945 real 0m13.949s 00:26:13.945 user 0m19.225s 00:26:13.945 sys 0m5.280s 00:26:13.945 10:07:27 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:13.945 10:07:27 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:26:13.945 ************************************ 00:26:13.945 END TEST nvmf_fips 00:26:13.945 ************************************ 00:26:13.945 10:07:27 
nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:26:13.945 10:07:27 nvmf_tcp -- nvmf/nvmf.sh@65 -- # '[' 0 -eq 1 ']' 00:26:13.945 10:07:27 nvmf_tcp -- nvmf/nvmf.sh@71 -- # [[ virt == phy ]] 00:26:13.945 10:07:27 nvmf_tcp -- nvmf/nvmf.sh@86 -- # timing_exit target 00:26:13.945 10:07:27 nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:13.945 10:07:27 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:13.945 10:07:27 nvmf_tcp -- nvmf/nvmf.sh@88 -- # timing_enter host 00:26:13.945 10:07:27 nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:13.945 10:07:27 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:13.945 10:07:27 nvmf_tcp -- nvmf/nvmf.sh@90 -- # [[ 0 -eq 0 ]] 00:26:13.945 10:07:27 nvmf_tcp -- nvmf/nvmf.sh@91 -- # run_test nvmf_multicontroller /home/vagrant/spdk_repo/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:26:13.945 10:07:27 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:26:13.945 10:07:27 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:13.945 10:07:27 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:13.945 ************************************ 00:26:13.945 START TEST nvmf_multicontroller 00:26:13.945 ************************************ 00:26:13.945 10:07:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:26:14.205 * Looking for test storage... 00:26:14.205 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:26:14.205 10:07:27 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:26:14.205 10:07:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:26:14.205 10:07:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:14.205 10:07:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:14.205 10:07:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:14.205 10:07:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:14.205 10:07:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:14.205 10:07:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:14.205 10:07:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:14.205 10:07:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:14.205 10:07:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:14.205 10:07:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:14.205 10:07:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec 00:26:14.205 10:07:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=a2b6b25a-cc90-4aea-9f09-c06f8a634aec 00:26:14.205 10:07:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:14.205 10:07:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:14.205 10:07:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:26:14.205 10:07:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:14.205 10:07:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:14.205 10:07:27 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:14.205 10:07:27 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:14.205 10:07:27 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:14.205 10:07:27 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:14.205 10:07:27 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:14.205 10:07:27 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:14.205 10:07:27 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:26:14.205 10:07:27 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:14.205 10:07:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@47 -- # : 0 00:26:14.205 10:07:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:14.205 10:07:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:14.205 10:07:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:14.205 10:07:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
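(The nvmf_veth_init bring-up traced over the following lines builds the two-namespace TCP test topology. A condensed sketch of the underlying ip/iptables calls as they appear in the trace — the link-up steps and the second target interface (nvmf_tgt_if2, 10.0.0.3) are handled the same way and omitted here for brevity:)
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator side stays in the host
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target side is moved into the namespace
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link add nvmf_br type bridge
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic through the host firewall
  ping -c 1 10.0.0.2   # connectivity checks, as in the trace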
00:26:14.205 10:07:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:14.205 10:07:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:14.205 10:07:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:14.205 10:07:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:14.205 10:07:27 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:14.205 10:07:27 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:26:14.205 10:07:27 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:26:14.205 10:07:27 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:26:14.205 10:07:27 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:26:14.205 10:07:27 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:26:14.205 10:07:27 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:26:14.205 10:07:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:14.205 10:07:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:14.205 10:07:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:14.205 10:07:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:14.205 10:07:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:14.205 10:07:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:14.205 10:07:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:14.205 10:07:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:14.205 10:07:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:26:14.205 10:07:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:26:14.205 10:07:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:26:14.205 10:07:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:26:14.205 10:07:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:26:14.205 10:07:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@432 -- # nvmf_veth_init 00:26:14.205 10:07:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:14.205 10:07:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:14.205 10:07:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:26:14.205 10:07:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:26:14.205 10:07:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:26:14.205 10:07:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:26:14.205 10:07:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:26:14.205 10:07:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:14.206 10:07:27 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:26:14.206 10:07:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:26:14.206 10:07:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:26:14.206 10:07:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:26:14.206 10:07:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:26:14.206 10:07:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:26:14.206 Cannot find device "nvmf_tgt_br" 00:26:14.206 10:07:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@155 -- # true 00:26:14.206 10:07:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:26:14.206 Cannot find device "nvmf_tgt_br2" 00:26:14.206 10:07:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@156 -- # true 00:26:14.206 10:07:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:26:14.206 10:07:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:26:14.206 Cannot find device "nvmf_tgt_br" 00:26:14.206 10:07:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@158 -- # true 00:26:14.206 10:07:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:26:14.206 Cannot find device "nvmf_tgt_br2" 00:26:14.206 10:07:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@159 -- # true 00:26:14.206 10:07:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:26:14.206 10:07:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:26:14.206 10:07:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:26:14.206 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:14.206 10:07:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@162 -- # true 00:26:14.206 10:07:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:26:14.206 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:14.206 10:07:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@163 -- # true 00:26:14.206 10:07:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:26:14.206 10:07:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:26:14.206 10:07:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:26:14.465 10:07:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:26:14.465 10:07:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:26:14.465 10:07:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:26:14.465 10:07:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:26:14.465 10:07:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:26:14.465 10:07:27 nvmf_tcp.nvmf_multicontroller -- 
nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:26:14.465 10:07:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:26:14.465 10:07:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:26:14.465 10:07:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:26:14.465 10:07:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:26:14.465 10:07:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:26:14.465 10:07:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:26:14.465 10:07:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:26:14.465 10:07:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:26:14.465 10:07:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:26:14.465 10:07:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:26:14.465 10:07:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:26:14.465 10:07:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:26:14.465 10:07:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:26:14.465 10:07:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:26:14.465 10:07:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:26:14.465 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:14.465 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.073 ms 00:26:14.465 00:26:14.465 --- 10.0.0.2 ping statistics --- 00:26:14.465 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:14.465 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:26:14.465 10:07:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:26:14.465 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:26:14.465 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.043 ms 00:26:14.465 00:26:14.465 --- 10.0.0.3 ping statistics --- 00:26:14.465 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:14.465 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:26:14.465 10:07:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:26:14.465 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:14.465 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:26:14.465 00:26:14.465 --- 10.0.0.1 ping statistics --- 00:26:14.465 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:14.465 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:26:14.465 10:07:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:14.465 10:07:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@433 -- # return 0 00:26:14.465 10:07:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:14.465 10:07:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:14.465 10:07:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:14.465 10:07:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:14.465 10:07:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:14.465 10:07:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:14.465 10:07:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:14.465 10:07:28 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:26:14.465 10:07:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:14.465 10:07:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:14.465 10:07:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:14.725 10:07:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:26:14.725 10:07:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@481 -- # nvmfpid=85892 00:26:14.725 10:07:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@482 -- # waitforlisten 85892 00:26:14.725 10:07:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@829 -- # '[' -z 85892 ']' 00:26:14.725 10:07:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:14.725 10:07:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:14.725 10:07:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:14.725 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:14.725 10:07:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:14.725 10:07:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:14.725 [2024-07-15 10:07:28.089347] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:26:14.725 [2024-07-15 10:07:28.089416] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:14.725 [2024-07-15 10:07:28.230095] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:26:14.984 [2024-07-15 10:07:28.336560] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:26:14.984 [2024-07-15 10:07:28.336723] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:14.984 [2024-07-15 10:07:28.336762] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:14.984 [2024-07-15 10:07:28.336789] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:14.984 [2024-07-15 10:07:28.336805] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:14.984 [2024-07-15 10:07:28.337108] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:26:14.984 [2024-07-15 10:07:28.337208] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:14.984 [2024-07-15 10:07:28.337212] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:26:15.551 10:07:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:15.551 10:07:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@862 -- # return 0 00:26:15.551 10:07:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:15.551 10:07:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:15.551 10:07:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:15.551 10:07:29 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:15.551 10:07:29 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:15.551 10:07:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:15.551 10:07:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:15.551 [2024-07-15 10:07:29.030230] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:15.551 10:07:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:15.551 10:07:29 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:26:15.551 10:07:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:15.551 10:07:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:15.551 Malloc0 00:26:15.551 10:07:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:15.551 10:07:29 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:15.551 10:07:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:15.551 10:07:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:15.551 10:07:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:15.551 10:07:29 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:15.551 10:07:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:15.551 10:07:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:15.551 10:07:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:15.551 10:07:29 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@33 
-- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:15.551 10:07:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:15.551 10:07:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:15.551 [2024-07-15 10:07:29.097733] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:15.551 10:07:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:15.551 10:07:29 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:26:15.551 10:07:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:15.551 10:07:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:15.551 [2024-07-15 10:07:29.109651] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:26:15.551 10:07:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:15.551 10:07:29 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:26:15.551 10:07:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:15.551 10:07:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:15.810 Malloc1 00:26:15.811 10:07:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:15.811 10:07:29 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:26:15.811 10:07:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:15.811 10:07:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:15.811 10:07:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:15.811 10:07:29 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:26:15.811 10:07:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:15.811 10:07:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:15.811 10:07:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:15.811 10:07:29 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:26:15.811 10:07:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:15.811 10:07:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:15.811 10:07:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:15.811 10:07:29 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:26:15.811 10:07:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:15.811 10:07:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:15.811 10:07:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:15.811 10:07:29 nvmf_tcp.nvmf_multicontroller -- 
host/multicontroller.sh@44 -- # bdevperf_pid=85944 00:26:15.811 10:07:29 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:26:15.811 10:07:29 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:15.811 10:07:29 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 85944 /var/tmp/bdevperf.sock 00:26:15.811 10:07:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@829 -- # '[' -z 85944 ']' 00:26:15.811 10:07:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:15.811 10:07:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:15.811 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:26:15.811 10:07:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:15.811 10:07:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:15.811 10:07:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:16.773 10:07:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:16.773 10:07:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@862 -- # return 0 00:26:16.773 10:07:30 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:26:16.773 10:07:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:16.773 10:07:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:16.773 NVMe0n1 00:26:16.773 10:07:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:16.773 10:07:30 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:26:16.773 10:07:30 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:26:16.773 10:07:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:16.773 10:07:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:16.773 10:07:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:16.773 1 00:26:16.773 10:07:30 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:26:16.773 10:07:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:26:16.773 10:07:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:26:16.773 10:07:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local 
arg=rpc_cmd 00:26:16.773 10:07:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:16.773 10:07:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:26:16.773 10:07:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:16.773 10:07:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:26:16.773 10:07:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:16.773 10:07:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:16.773 2024/07/15 10:07:30 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostaddr:10.0.0.2 hostnqn:nqn.2021-09-7.io.spdk:00001 hostsvcid:60000 name:NVMe0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists with the specified network path 00:26:16.773 request: 00:26:16.773 { 00:26:16.773 "method": "bdev_nvme_attach_controller", 00:26:16.773 "params": { 00:26:16.773 "name": "NVMe0", 00:26:16.773 "trtype": "tcp", 00:26:16.773 "traddr": "10.0.0.2", 00:26:16.773 "adrfam": "ipv4", 00:26:16.773 "trsvcid": "4420", 00:26:16.773 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:16.773 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:26:16.773 "hostaddr": "10.0.0.2", 00:26:16.773 "hostsvcid": "60000", 00:26:16.773 "prchk_reftag": false, 00:26:16.773 "prchk_guard": false, 00:26:16.773 "hdgst": false, 00:26:16.773 "ddgst": false 00:26:16.773 } 00:26:16.773 } 00:26:16.773 Got JSON-RPC error response 00:26:16.773 GoRPCClient: error on JSON-RPC call 00:26:16.773 10:07:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:26:16.773 10:07:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:26:16.773 10:07:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:26:16.773 10:07:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:26:16.773 10:07:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:26:16.773 10:07:30 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:26:16.773 10:07:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:26:16.773 10:07:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:26:16.773 10:07:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:26:16.773 10:07:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:16.773 10:07:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:26:16.773 10:07:30 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:16.773 10:07:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:26:16.773 10:07:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:16.773 10:07:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:16.773 2024/07/15 10:07:30 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostaddr:10.0.0.2 hostsvcid:60000 name:NVMe0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2016-06.io.spdk:cnode2 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists with the specified network path 00:26:16.773 request: 00:26:16.773 { 00:26:16.773 "method": "bdev_nvme_attach_controller", 00:26:16.773 "params": { 00:26:16.773 "name": "NVMe0", 00:26:16.773 "trtype": "tcp", 00:26:16.773 "traddr": "10.0.0.2", 00:26:16.773 "adrfam": "ipv4", 00:26:16.773 "trsvcid": "4420", 00:26:16.773 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:26:16.773 "hostaddr": "10.0.0.2", 00:26:16.773 "hostsvcid": "60000", 00:26:16.773 "prchk_reftag": false, 00:26:16.773 "prchk_guard": false, 00:26:16.773 "hdgst": false, 00:26:16.773 "ddgst": false 00:26:16.773 } 00:26:16.773 } 00:26:16.773 Got JSON-RPC error response 00:26:16.773 GoRPCClient: error on JSON-RPC call 00:26:16.773 10:07:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:26:16.773 10:07:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:26:16.773 10:07:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:26:16.773 10:07:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:26:16.773 10:07:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:26:16.773 10:07:30 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:26:16.773 10:07:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:26:16.774 10:07:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:26:16.774 10:07:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:26:16.774 10:07:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:16.774 10:07:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:26:16.774 10:07:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:16.774 10:07:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:26:16.774 10:07:30 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:16.774 10:07:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:16.774 2024/07/15 10:07:30 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostaddr:10.0.0.2 hostsvcid:60000 multipath:disable name:NVMe0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists and multipath is disabled 00:26:16.774 request: 00:26:16.774 { 00:26:16.774 "method": "bdev_nvme_attach_controller", 00:26:16.774 "params": { 00:26:16.774 "name": "NVMe0", 00:26:16.774 "trtype": "tcp", 00:26:16.774 "traddr": "10.0.0.2", 00:26:16.774 "adrfam": "ipv4", 00:26:16.774 "trsvcid": "4420", 00:26:16.774 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:16.774 "hostaddr": "10.0.0.2", 00:26:16.774 "hostsvcid": "60000", 00:26:16.774 "prchk_reftag": false, 00:26:16.774 "prchk_guard": false, 00:26:16.774 "hdgst": false, 00:26:16.774 "ddgst": false, 00:26:16.774 "multipath": "disable" 00:26:16.774 } 00:26:16.774 } 00:26:16.774 Got JSON-RPC error response 00:26:16.774 GoRPCClient: error on JSON-RPC call 00:26:16.774 10:07:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:26:16.774 10:07:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:26:16.774 10:07:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:26:16.774 10:07:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:26:16.774 10:07:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:26:16.774 10:07:30 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:26:16.774 10:07:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:26:16.774 10:07:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:26:16.774 10:07:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:26:16.774 10:07:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:16.774 10:07:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:26:16.774 10:07:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:16.774 10:07:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:26:16.774 10:07:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:16.774 10:07:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:16.774 2024/07/15 10:07:30 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 
ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostaddr:10.0.0.2 hostsvcid:60000 multipath:failover name:NVMe0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists with the specified network path 00:26:16.774 request: 00:26:16.774 { 00:26:16.774 "method": "bdev_nvme_attach_controller", 00:26:16.774 "params": { 00:26:16.774 "name": "NVMe0", 00:26:16.774 "trtype": "tcp", 00:26:16.774 "traddr": "10.0.0.2", 00:26:16.774 "adrfam": "ipv4", 00:26:16.774 "trsvcid": "4420", 00:26:16.774 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:16.774 "hostaddr": "10.0.0.2", 00:26:16.774 "hostsvcid": "60000", 00:26:16.774 "prchk_reftag": false, 00:26:16.774 "prchk_guard": false, 00:26:16.774 "hdgst": false, 00:26:16.774 "ddgst": false, 00:26:16.774 "multipath": "failover" 00:26:16.774 } 00:26:16.774 } 00:26:16.774 Got JSON-RPC error response 00:26:16.774 GoRPCClient: error on JSON-RPC call 00:26:16.774 10:07:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:26:16.774 10:07:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:26:16.774 10:07:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:26:16.774 10:07:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:26:16.774 10:07:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:26:16.774 10:07:30 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:26:16.774 10:07:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:16.774 10:07:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:16.774 00:26:16.774 10:07:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:16.774 10:07:30 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:26:16.774 10:07:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:16.774 10:07:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:17.033 10:07:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:17.033 10:07:30 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:26:17.033 10:07:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:17.033 10:07:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:17.033 00:26:17.033 10:07:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:17.033 10:07:30 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:26:17.033 10:07:30 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:26:17.033 10:07:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:26:17.033 10:07:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:17.033 10:07:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:17.033 10:07:30 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:26:17.033 10:07:30 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:26:18.413 0 00:26:18.413 10:07:31 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:26:18.413 10:07:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:18.413 10:07:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:18.413 10:07:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:18.413 10:07:31 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@100 -- # killprocess 85944 00:26:18.413 10:07:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@948 -- # '[' -z 85944 ']' 00:26:18.413 10:07:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # kill -0 85944 00:26:18.413 10:07:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # uname 00:26:18.413 10:07:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:18.413 10:07:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 85944 00:26:18.413 10:07:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:26:18.413 10:07:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:26:18.413 killing process with pid 85944 00:26:18.413 10:07:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@966 -- # echo 'killing process with pid 85944' 00:26:18.413 10:07:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@967 -- # kill 85944 00:26:18.413 10:07:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@972 -- # wait 85944 00:26:18.413 10:07:31 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:18.413 10:07:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:18.413 10:07:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:18.413 10:07:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:18.413 10:07:31 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:26:18.413 10:07:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:18.413 10:07:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:18.413 10:07:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:18.413 10:07:31 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:26:18.413 10:07:31 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@107 -- # pap /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:26:18.413 10:07:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file 00:26:18.413 10:07:31 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@1611 -- # find /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt -type f 00:26:18.413 10:07:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # sort -u 00:26:18.413 10:07:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1613 -- # cat 00:26:18.413 --- /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt --- 00:26:18.413 [2024-07-15 10:07:29.236975] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:26:18.413 [2024-07-15 10:07:29.237067] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85944 ] 00:26:18.413 [2024-07-15 10:07:29.368516] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:18.413 [2024-07-15 10:07:29.472414] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:18.413 [2024-07-15 10:07:30.426773] bdev.c:4613:bdev_name_add: *ERROR*: Bdev name 5a20cf1c-555a-4709-8c3d-7c710505fc95 already exists 00:26:18.413 [2024-07-15 10:07:30.426835] bdev.c:7722:bdev_register: *ERROR*: Unable to add uuid:5a20cf1c-555a-4709-8c3d-7c710505fc95 alias for bdev NVMe1n1 00:26:18.413 [2024-07-15 10:07:30.426847] bdev_nvme.c:4317:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:26:18.413 Running I/O for 1 seconds... 00:26:18.413 00:26:18.413 Latency(us) 00:26:18.413 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:18.413 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:26:18.414 NVMe0n1 : 1.01 25452.91 99.43 0.00 0.00 5021.86 2833.22 9558.53 00:26:18.414 =================================================================================================================== 00:26:18.414 Total : 25452.91 99.43 0.00 0.00 5021.86 2833.22 9558.53 00:26:18.414 Received shutdown signal, test time was about 1.000000 seconds 00:26:18.414 00:26:18.414 Latency(us) 00:26:18.414 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:18.414 =================================================================================================================== 00:26:18.414 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:18.414 --- /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt --- 00:26:18.414 10:07:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1618 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:26:18.414 10:07:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file 00:26:18.414 10:07:31 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@108 -- # nvmftestfini 00:26:18.414 10:07:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:18.414 10:07:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@117 -- # sync 00:26:18.414 10:07:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:18.414 10:07:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@120 -- # set +e 00:26:18.414 10:07:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:18.414 10:07:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:18.414 rmmod nvme_tcp 00:26:18.414 rmmod nvme_fabrics 00:26:18.414 rmmod nvme_keyring 00:26:18.414 10:07:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:18.414 10:07:31 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@124 -- # set -e 00:26:18.414 10:07:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@125 -- # return 0 00:26:18.414 10:07:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@489 -- # '[' -n 85892 ']' 00:26:18.414 10:07:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@490 -- # killprocess 85892 00:26:18.414 10:07:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@948 -- # '[' -z 85892 ']' 00:26:18.414 10:07:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # kill -0 85892 00:26:18.414 10:07:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # uname 00:26:18.414 10:07:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:18.414 10:07:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 85892 00:26:18.674 10:07:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:26:18.674 10:07:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:26:18.674 killing process with pid 85892 00:26:18.674 10:07:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@966 -- # echo 'killing process with pid 85892' 00:26:18.674 10:07:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@967 -- # kill 85892 00:26:18.674 10:07:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@972 -- # wait 85892 00:26:18.674 10:07:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:18.674 10:07:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:18.674 10:07:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:18.674 10:07:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:18.674 10:07:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:18.674 10:07:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:18.674 10:07:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:18.674 10:07:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:18.934 10:07:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:26:18.934 00:26:18.934 real 0m4.867s 00:26:18.934 user 0m14.794s 00:26:18.934 sys 0m1.067s 00:26:18.934 10:07:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:18.934 10:07:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:18.934 ************************************ 00:26:18.934 END TEST nvmf_multicontroller 00:26:18.934 ************************************ 00:26:18.934 10:07:32 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:26:18.934 10:07:32 nvmf_tcp -- nvmf/nvmf.sh@92 -- # run_test nvmf_aer /home/vagrant/spdk_repo/spdk/test/nvmf/host/aer.sh --transport=tcp 00:26:18.934 10:07:32 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:26:18.934 10:07:32 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:18.934 10:07:32 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:18.934 ************************************ 00:26:18.934 START TEST nvmf_aer 00:26:18.934 ************************************ 00:26:18.934 10:07:32 nvmf_tcp.nvmf_aer -- 
common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/aer.sh --transport=tcp 00:26:18.934 * Looking for test storage... 00:26:18.934 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:26:18.934 10:07:32 nvmf_tcp.nvmf_aer -- host/aer.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:26:18.934 10:07:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:26:18.934 10:07:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:18.934 10:07:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:18.934 10:07:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:18.934 10:07:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:18.934 10:07:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:18.934 10:07:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:18.934 10:07:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:18.934 10:07:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:18.934 10:07:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:18.934 10:07:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:18.934 10:07:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec 00:26:18.934 10:07:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=a2b6b25a-cc90-4aea-9f09-c06f8a634aec 00:26:18.934 10:07:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:18.934 10:07:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:18.934 10:07:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:26:18.934 10:07:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:18.934 10:07:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:18.934 10:07:32 nvmf_tcp.nvmf_aer -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:18.934 10:07:32 nvmf_tcp.nvmf_aer -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:18.934 10:07:32 nvmf_tcp.nvmf_aer -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:18.934 10:07:32 nvmf_tcp.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:18.934 10:07:32 nvmf_tcp.nvmf_aer -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:18.934 10:07:32 nvmf_tcp.nvmf_aer -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:18.934 10:07:32 nvmf_tcp.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:26:18.935 10:07:32 nvmf_tcp.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:18.935 10:07:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@47 -- # : 0 00:26:18.935 10:07:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:18.935 10:07:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:18.935 10:07:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:18.935 10:07:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:18.935 10:07:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:18.935 10:07:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:18.935 10:07:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:18.935 10:07:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:18.935 10:07:32 nvmf_tcp.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:26:18.935 10:07:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:18.935 10:07:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:18.935 10:07:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:18.935 10:07:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:18.935 10:07:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:18.935 10:07:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:18.935 10:07:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:18.935 10:07:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:19.195 10:07:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:26:19.195 10:07:32 nvmf_tcp.nvmf_aer -- 
nvmf/common.sh@416 -- # [[ no == yes ]] 00:26:19.195 10:07:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:26:19.195 10:07:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:26:19.195 10:07:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:26:19.195 10:07:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@432 -- # nvmf_veth_init 00:26:19.195 10:07:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:19.195 10:07:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:19.195 10:07:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:26:19.195 10:07:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:26:19.195 10:07:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:26:19.195 10:07:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:26:19.195 10:07:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:26:19.195 10:07:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:19.195 10:07:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:26:19.195 10:07:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:26:19.195 10:07:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:26:19.195 10:07:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:26:19.195 10:07:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:26:19.195 10:07:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:26:19.195 Cannot find device "nvmf_tgt_br" 00:26:19.195 10:07:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@155 -- # true 00:26:19.195 10:07:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:26:19.195 Cannot find device "nvmf_tgt_br2" 00:26:19.195 10:07:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@156 -- # true 00:26:19.195 10:07:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:26:19.195 10:07:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:26:19.195 Cannot find device "nvmf_tgt_br" 00:26:19.195 10:07:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@158 -- # true 00:26:19.195 10:07:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:26:19.195 Cannot find device "nvmf_tgt_br2" 00:26:19.195 10:07:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@159 -- # true 00:26:19.195 10:07:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:26:19.195 10:07:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:26:19.195 10:07:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:26:19.195 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:19.195 10:07:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@162 -- # true 00:26:19.195 10:07:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:26:19.195 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:19.195 10:07:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@163 -- # true 00:26:19.195 10:07:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:26:19.195 
10:07:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:26:19.195 10:07:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:26:19.195 10:07:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:26:19.195 10:07:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:26:19.195 10:07:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:26:19.195 10:07:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:26:19.195 10:07:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:26:19.195 10:07:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:26:19.195 10:07:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:26:19.195 10:07:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:26:19.195 10:07:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:26:19.195 10:07:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:26:19.455 10:07:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:26:19.455 10:07:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:26:19.455 10:07:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:26:19.455 10:07:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:26:19.455 10:07:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:26:19.455 10:07:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:26:19.455 10:07:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:26:19.455 10:07:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:26:19.455 10:07:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:26:19.455 10:07:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:26:19.455 10:07:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:26:19.455 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:19.455 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.071 ms 00:26:19.455 00:26:19.455 --- 10.0.0.2 ping statistics --- 00:26:19.455 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:19.455 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:26:19.455 10:07:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:26:19.455 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:26:19.455 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.036 ms 00:26:19.455 00:26:19.455 --- 10.0.0.3 ping statistics --- 00:26:19.455 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:19.455 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:26:19.455 10:07:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:26:19.455 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:19.455 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.056 ms 00:26:19.455 00:26:19.455 --- 10.0.0.1 ping statistics --- 00:26:19.455 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:19.455 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:26:19.455 10:07:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:19.455 10:07:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@433 -- # return 0 00:26:19.455 10:07:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:19.455 10:07:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:19.455 10:07:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:19.455 10:07:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:19.455 10:07:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:19.455 10:07:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:19.455 10:07:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:19.456 10:07:32 nvmf_tcp.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:26:19.456 10:07:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:19.456 10:07:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:19.456 10:07:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:19.456 10:07:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@481 -- # nvmfpid=86196 00:26:19.456 10:07:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:26:19.456 10:07:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@482 -- # waitforlisten 86196 00:26:19.456 10:07:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@829 -- # '[' -z 86196 ']' 00:26:19.456 10:07:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:19.456 10:07:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:19.456 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:19.456 10:07:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:19.456 10:07:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:19.456 10:07:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:19.456 [2024-07-15 10:07:32.960326] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:26:19.456 [2024-07-15 10:07:32.960428] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:19.715 [2024-07-15 10:07:33.101345] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:19.715 [2024-07-15 10:07:33.204758] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:19.715 [2024-07-15 10:07:33.204804] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:26:19.715 [2024-07-15 10:07:33.204810] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:19.715 [2024-07-15 10:07:33.204815] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:19.715 [2024-07-15 10:07:33.204819] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:19.715 [2024-07-15 10:07:33.204945] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:19.715 [2024-07-15 10:07:33.205018] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:26:19.715 [2024-07-15 10:07:33.205341] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:26:19.715 [2024-07-15 10:07:33.205346] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:20.282 10:07:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:20.282 10:07:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@862 -- # return 0 00:26:20.282 10:07:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:20.282 10:07:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:20.282 10:07:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:20.282 10:07:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:20.541 10:07:33 nvmf_tcp.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:20.541 10:07:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:20.541 10:07:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:20.541 [2024-07-15 10:07:33.877541] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:20.541 10:07:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:20.541 10:07:33 nvmf_tcp.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:26:20.541 10:07:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:20.541 10:07:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:20.541 Malloc0 00:26:20.541 10:07:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:20.541 10:07:33 nvmf_tcp.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:26:20.541 10:07:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:20.541 10:07:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:20.541 10:07:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:20.541 10:07:33 nvmf_tcp.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:20.541 10:07:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:20.541 10:07:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:20.541 10:07:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:20.541 10:07:33 nvmf_tcp.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:20.541 10:07:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:20.541 10:07:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:20.541 [2024-07-15 10:07:33.946727] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.2 port 4420 *** 00:26:20.541 10:07:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:20.541 10:07:33 nvmf_tcp.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:26:20.541 10:07:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:20.541 10:07:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:20.541 [ 00:26:20.541 { 00:26:20.541 "allow_any_host": true, 00:26:20.541 "hosts": [], 00:26:20.541 "listen_addresses": [], 00:26:20.541 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:26:20.541 "subtype": "Discovery" 00:26:20.541 }, 00:26:20.541 { 00:26:20.541 "allow_any_host": true, 00:26:20.541 "hosts": [], 00:26:20.541 "listen_addresses": [ 00:26:20.541 { 00:26:20.541 "adrfam": "IPv4", 00:26:20.541 "traddr": "10.0.0.2", 00:26:20.541 "trsvcid": "4420", 00:26:20.541 "trtype": "TCP" 00:26:20.541 } 00:26:20.541 ], 00:26:20.541 "max_cntlid": 65519, 00:26:20.541 "max_namespaces": 2, 00:26:20.541 "min_cntlid": 1, 00:26:20.541 "model_number": "SPDK bdev Controller", 00:26:20.541 "namespaces": [ 00:26:20.541 { 00:26:20.541 "bdev_name": "Malloc0", 00:26:20.541 "name": "Malloc0", 00:26:20.541 "nguid": "2D3E08F291A84BC79A89213562763778", 00:26:20.541 "nsid": 1, 00:26:20.541 "uuid": "2d3e08f2-91a8-4bc7-9a89-213562763778" 00:26:20.541 } 00:26:20.541 ], 00:26:20.541 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:26:20.541 "serial_number": "SPDK00000000000001", 00:26:20.541 "subtype": "NVMe" 00:26:20.541 } 00:26:20.541 ] 00:26:20.541 10:07:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:20.541 10:07:33 nvmf_tcp.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:26:20.541 10:07:33 nvmf_tcp.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:26:20.541 10:07:33 nvmf_tcp.nvmf_aer -- host/aer.sh@33 -- # aerpid=86251 00:26:20.541 10:07:33 nvmf_tcp.nvmf_aer -- host/aer.sh@27 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:26:20.541 10:07:33 nvmf_tcp.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:26:20.541 10:07:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # local i=0 00:26:20.541 10:07:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:26:20.541 10:07:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 0 -lt 200 ']' 00:26:20.541 10:07:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # i=1 00:26:20.541 10:07:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:26:20.541 10:07:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:26:20.541 10:07:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 1 -lt 200 ']' 00:26:20.541 10:07:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # i=2 00:26:20.541 10:07:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:26:20.800 10:07:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:26:20.800 10:07:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1272 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:26:20.800 10:07:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1276 -- # return 0 00:26:20.801 10:07:34 nvmf_tcp.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:26:20.801 10:07:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:20.801 10:07:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:20.801 Malloc1 00:26:20.801 10:07:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:20.801 10:07:34 nvmf_tcp.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:26:20.801 10:07:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:20.801 10:07:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:20.801 10:07:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:20.801 10:07:34 nvmf_tcp.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:26:20.801 10:07:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:20.801 10:07:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:20.801 Asynchronous Event Request test 00:26:20.801 Attaching to 10.0.0.2 00:26:20.801 Attached to 10.0.0.2 00:26:20.801 Registering asynchronous event callbacks... 00:26:20.801 Starting namespace attribute notice tests for all controllers... 00:26:20.801 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:26:20.801 aer_cb - Changed Namespace 00:26:20.801 Cleaning up... 00:26:20.801 [ 00:26:20.801 { 00:26:20.801 "allow_any_host": true, 00:26:20.801 "hosts": [], 00:26:20.801 "listen_addresses": [], 00:26:20.801 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:26:20.801 "subtype": "Discovery" 00:26:20.801 }, 00:26:20.801 { 00:26:20.801 "allow_any_host": true, 00:26:20.801 "hosts": [], 00:26:20.801 "listen_addresses": [ 00:26:20.801 { 00:26:20.801 "adrfam": "IPv4", 00:26:20.801 "traddr": "10.0.0.2", 00:26:20.801 "trsvcid": "4420", 00:26:20.801 "trtype": "TCP" 00:26:20.801 } 00:26:20.801 ], 00:26:20.801 "max_cntlid": 65519, 00:26:20.801 "max_namespaces": 2, 00:26:20.801 "min_cntlid": 1, 00:26:20.801 "model_number": "SPDK bdev Controller", 00:26:20.801 "namespaces": [ 00:26:20.801 { 00:26:20.801 "bdev_name": "Malloc0", 00:26:20.801 "name": "Malloc0", 00:26:20.801 "nguid": "2D3E08F291A84BC79A89213562763778", 00:26:20.801 "nsid": 1, 00:26:20.801 "uuid": "2d3e08f2-91a8-4bc7-9a89-213562763778" 00:26:20.801 }, 00:26:20.801 { 00:26:20.801 "bdev_name": "Malloc1", 00:26:20.801 "name": "Malloc1", 00:26:20.801 "nguid": "4E36FB87B43346288A983D34BE138043", 00:26:20.801 "nsid": 2, 00:26:20.801 "uuid": "4e36fb87-b433-4628-8a98-3d34be138043" 00:26:20.801 } 00:26:20.801 ], 00:26:20.801 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:26:20.801 "serial_number": "SPDK00000000000001", 00:26:20.801 "subtype": "NVMe" 00:26:20.801 } 00:26:20.801 ] 00:26:20.801 10:07:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:20.801 10:07:34 nvmf_tcp.nvmf_aer -- host/aer.sh@43 -- # wait 86251 00:26:20.801 10:07:34 nvmf_tcp.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:26:20.801 10:07:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:20.801 10:07:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:20.801 10:07:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:20.801 10:07:34 nvmf_tcp.nvmf_aer -- host/aer.sh@46 -- 
# rpc_cmd bdev_malloc_delete Malloc1 00:26:20.801 10:07:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:20.801 10:07:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:20.801 10:07:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:20.801 10:07:34 nvmf_tcp.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:20.801 10:07:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:20.801 10:07:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:20.801 10:07:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:20.801 10:07:34 nvmf_tcp.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:26:20.801 10:07:34 nvmf_tcp.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:26:20.801 10:07:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:20.801 10:07:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@117 -- # sync 00:26:21.060 10:07:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:21.060 10:07:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@120 -- # set +e 00:26:21.060 10:07:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:21.060 10:07:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:21.060 rmmod nvme_tcp 00:26:21.060 rmmod nvme_fabrics 00:26:21.060 rmmod nvme_keyring 00:26:21.060 10:07:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:21.060 10:07:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@124 -- # set -e 00:26:21.060 10:07:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@125 -- # return 0 00:26:21.060 10:07:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@489 -- # '[' -n 86196 ']' 00:26:21.060 10:07:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@490 -- # killprocess 86196 00:26:21.060 10:07:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@948 -- # '[' -z 86196 ']' 00:26:21.060 10:07:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@952 -- # kill -0 86196 00:26:21.060 10:07:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@953 -- # uname 00:26:21.060 10:07:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:21.060 10:07:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 86196 00:26:21.060 10:07:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:26:21.060 10:07:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:26:21.060 10:07:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@966 -- # echo 'killing process with pid 86196' 00:26:21.060 killing process with pid 86196 00:26:21.060 10:07:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@967 -- # kill 86196 00:26:21.060 10:07:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@972 -- # wait 86196 00:26:21.320 10:07:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:21.320 10:07:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:21.320 10:07:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:21.320 10:07:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:21.320 10:07:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:21.320 10:07:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:21.320 10:07:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 
00:26:21.320 10:07:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:21.320 10:07:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:26:21.320 00:26:21.320 real 0m2.368s 00:26:21.320 user 0m6.145s 00:26:21.320 sys 0m0.709s 00:26:21.320 10:07:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:21.320 10:07:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:21.320 ************************************ 00:26:21.320 END TEST nvmf_aer 00:26:21.320 ************************************ 00:26:21.320 10:07:34 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:26:21.320 10:07:34 nvmf_tcp -- nvmf/nvmf.sh@93 -- # run_test nvmf_async_init /home/vagrant/spdk_repo/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:26:21.320 10:07:34 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:26:21.320 10:07:34 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:21.320 10:07:34 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:21.320 ************************************ 00:26:21.320 START TEST nvmf_async_init 00:26:21.320 ************************************ 00:26:21.320 10:07:34 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:26:21.320 * Looking for test storage... 00:26:21.320 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:26:21.320 10:07:34 nvmf_tcp.nvmf_async_init -- host/async_init.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:26:21.320 10:07:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:26:21.320 10:07:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:21.320 10:07:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:21.320 10:07:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:21.320 10:07:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:21.320 10:07:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:21.320 10:07:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:21.320 10:07:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:21.320 10:07:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:21.320 10:07:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:21.320 10:07:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:21.579 10:07:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec 00:26:21.579 10:07:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=a2b6b25a-cc90-4aea-9f09-c06f8a634aec 00:26:21.579 10:07:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:21.579 10:07:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:21.579 10:07:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:26:21.579 10:07:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:21.579 10:07:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:21.579 10:07:34 nvmf_tcp.nvmf_async_init 
-- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:21.579 10:07:34 nvmf_tcp.nvmf_async_init -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:21.579 10:07:34 nvmf_tcp.nvmf_async_init -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:21.579 10:07:34 nvmf_tcp.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:21.579 10:07:34 nvmf_tcp.nvmf_async_init -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:21.579 10:07:34 nvmf_tcp.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:21.579 10:07:34 nvmf_tcp.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:26:21.579 10:07:34 nvmf_tcp.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:21.579 10:07:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@47 -- # : 0 00:26:21.579 10:07:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:21.579 10:07:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:21.579 10:07:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:21.579 10:07:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:21.579 10:07:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:21.579 10:07:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:21.579 10:07:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:21.579 
10:07:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:21.579 10:07:34 nvmf_tcp.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:26:21.579 10:07:34 nvmf_tcp.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:26:21.579 10:07:34 nvmf_tcp.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:26:21.579 10:07:34 nvmf_tcp.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:26:21.579 10:07:34 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:26:21.579 10:07:34 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:26:21.579 10:07:34 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # nguid=6822b8baa24e433c8c4e37bb1544a02a 00:26:21.579 10:07:34 nvmf_tcp.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:26:21.579 10:07:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:21.579 10:07:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:21.579 10:07:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:21.579 10:07:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:21.579 10:07:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:21.579 10:07:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:21.579 10:07:34 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:21.579 10:07:34 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:21.579 10:07:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:26:21.579 10:07:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:26:21.579 10:07:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:26:21.579 10:07:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:26:21.579 10:07:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:26:21.579 10:07:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@432 -- # nvmf_veth_init 00:26:21.579 10:07:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:21.579 10:07:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:21.579 10:07:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:26:21.579 10:07:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:26:21.579 10:07:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:26:21.579 10:07:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:26:21.579 10:07:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:26:21.579 10:07:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:21.579 10:07:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:26:21.579 10:07:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:26:21.579 10:07:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:26:21.579 10:07:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:26:21.579 10:07:34 nvmf_tcp.nvmf_async_init -- 
nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:26:21.579 10:07:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:26:21.579 Cannot find device "nvmf_tgt_br" 00:26:21.579 10:07:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@155 -- # true 00:26:21.579 10:07:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:26:21.579 Cannot find device "nvmf_tgt_br2" 00:26:21.579 10:07:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@156 -- # true 00:26:21.579 10:07:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:26:21.579 10:07:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:26:21.579 Cannot find device "nvmf_tgt_br" 00:26:21.579 10:07:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@158 -- # true 00:26:21.579 10:07:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:26:21.579 Cannot find device "nvmf_tgt_br2" 00:26:21.579 10:07:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@159 -- # true 00:26:21.579 10:07:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:26:21.579 10:07:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:26:21.580 10:07:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:26:21.580 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:21.580 10:07:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@162 -- # true 00:26:21.580 10:07:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:26:21.580 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:21.580 10:07:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@163 -- # true 00:26:21.580 10:07:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:26:21.580 10:07:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:26:21.580 10:07:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:26:21.580 10:07:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:26:21.580 10:07:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:26:21.580 10:07:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:26:21.580 10:07:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:26:21.580 10:07:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:26:21.580 10:07:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:26:21.840 10:07:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:26:21.840 10:07:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:26:21.840 10:07:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:26:21.840 10:07:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:26:21.840 10:07:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@187 -- # ip netns exec 
nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:26:21.840 10:07:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:26:21.840 10:07:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:26:21.840 10:07:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:26:21.840 10:07:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:26:21.840 10:07:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:26:21.840 10:07:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:26:21.840 10:07:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:26:21.840 10:07:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:26:21.840 10:07:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:26:21.840 10:07:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:26:21.840 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:21.840 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.063 ms 00:26:21.840 00:26:21.840 --- 10.0.0.2 ping statistics --- 00:26:21.840 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:21.840 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:26:21.840 10:07:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:26:21.840 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:26:21.840 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.036 ms 00:26:21.840 00:26:21.840 --- 10.0.0.3 ping statistics --- 00:26:21.840 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:21.840 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:26:21.840 10:07:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:26:21.840 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:21.840 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.058 ms 00:26:21.840 00:26:21.840 --- 10.0.0.1 ping statistics --- 00:26:21.840 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:21.840 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:26:21.840 10:07:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:21.840 10:07:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@433 -- # return 0 00:26:21.840 10:07:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:21.840 10:07:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:21.840 10:07:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:21.840 10:07:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:21.840 10:07:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:21.840 10:07:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:21.840 10:07:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:21.840 10:07:35 nvmf_tcp.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:26:21.840 10:07:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:21.840 10:07:35 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:21.840 10:07:35 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:21.840 10:07:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@481 -- # nvmfpid=86412 00:26:21.840 10:07:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@482 -- # waitforlisten 86412 00:26:21.840 10:07:35 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@829 -- # '[' -z 86412 ']' 00:26:21.840 10:07:35 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:21.840 10:07:35 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:21.840 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:21.840 10:07:35 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:21.840 10:07:35 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:21.840 10:07:35 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:21.840 10:07:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:26:21.840 [2024-07-15 10:07:35.321814] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:26:21.840 [2024-07-15 10:07:35.321880] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:22.099 [2024-07-15 10:07:35.457986] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:22.099 [2024-07-15 10:07:35.560535] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:22.099 [2024-07-15 10:07:35.560580] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:26:22.099 [2024-07-15 10:07:35.560586] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:22.099 [2024-07-15 10:07:35.560591] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:22.099 [2024-07-15 10:07:35.560595] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:22.099 [2024-07-15 10:07:35.560617] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:22.714 10:07:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:22.714 10:07:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@862 -- # return 0 00:26:22.714 10:07:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:22.714 10:07:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:22.714 10:07:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:22.714 10:07:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:22.714 10:07:36 nvmf_tcp.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:26:22.714 10:07:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:22.714 10:07:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:22.714 [2024-07-15 10:07:36.222718] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:22.714 10:07:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:22.714 10:07:36 nvmf_tcp.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:26:22.714 10:07:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:22.714 10:07:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:22.714 null0 00:26:22.714 10:07:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:22.714 10:07:36 nvmf_tcp.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:26:22.714 10:07:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:22.714 10:07:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:22.714 10:07:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:22.714 10:07:36 nvmf_tcp.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:26:22.714 10:07:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:22.714 10:07:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:22.714 10:07:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:22.714 10:07:36 nvmf_tcp.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 6822b8baa24e433c8c4e37bb1544a02a 00:26:22.714 10:07:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:22.714 10:07:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:22.715 10:07:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:22.715 10:07:36 nvmf_tcp.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:22.715 
10:07:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:22.715 10:07:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:22.715 [2024-07-15 10:07:36.278695] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:22.715 10:07:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:22.715 10:07:36 nvmf_tcp.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:26:22.715 10:07:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:22.715 10:07:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:22.975 nvme0n1 00:26:22.975 10:07:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:22.975 10:07:36 nvmf_tcp.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:26:22.975 10:07:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:22.975 10:07:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:22.975 [ 00:26:22.975 { 00:26:22.975 "aliases": [ 00:26:22.975 "6822b8ba-a24e-433c-8c4e-37bb1544a02a" 00:26:22.975 ], 00:26:22.975 "assigned_rate_limits": { 00:26:22.975 "r_mbytes_per_sec": 0, 00:26:22.975 "rw_ios_per_sec": 0, 00:26:22.975 "rw_mbytes_per_sec": 0, 00:26:22.975 "w_mbytes_per_sec": 0 00:26:22.975 }, 00:26:22.975 "block_size": 512, 00:26:22.975 "claimed": false, 00:26:22.975 "driver_specific": { 00:26:22.975 "mp_policy": "active_passive", 00:26:22.975 "nvme": [ 00:26:22.975 { 00:26:22.975 "ctrlr_data": { 00:26:22.975 "ana_reporting": false, 00:26:22.975 "cntlid": 1, 00:26:22.975 "firmware_revision": "24.09", 00:26:22.975 "model_number": "SPDK bdev Controller", 00:26:22.975 "multi_ctrlr": true, 00:26:22.975 "oacs": { 00:26:22.975 "firmware": 0, 00:26:22.975 "format": 0, 00:26:22.975 "ns_manage": 0, 00:26:22.975 "security": 0 00:26:22.975 }, 00:26:22.975 "serial_number": "00000000000000000000", 00:26:22.975 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:22.975 "vendor_id": "0x8086" 00:26:22.975 }, 00:26:22.975 "ns_data": { 00:26:22.975 "can_share": true, 00:26:22.975 "id": 1 00:26:22.975 }, 00:26:22.975 "trid": { 00:26:22.975 "adrfam": "IPv4", 00:26:22.975 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:22.975 "traddr": "10.0.0.2", 00:26:22.975 "trsvcid": "4420", 00:26:22.975 "trtype": "TCP" 00:26:22.975 }, 00:26:22.975 "vs": { 00:26:22.975 "nvme_version": "1.3" 00:26:22.975 } 00:26:22.975 } 00:26:22.975 ] 00:26:22.975 }, 00:26:22.975 "memory_domains": [ 00:26:22.975 { 00:26:22.975 "dma_device_id": "system", 00:26:22.975 "dma_device_type": 1 00:26:22.975 } 00:26:22.975 ], 00:26:22.975 "name": "nvme0n1", 00:26:22.975 "num_blocks": 2097152, 00:26:22.975 "product_name": "NVMe disk", 00:26:22.975 "supported_io_types": { 00:26:22.975 "abort": true, 00:26:22.975 "compare": true, 00:26:22.975 "compare_and_write": true, 00:26:22.975 "copy": true, 00:26:22.975 "flush": true, 00:26:22.975 "get_zone_info": false, 00:26:22.975 "nvme_admin": true, 00:26:22.975 "nvme_io": true, 00:26:22.975 "nvme_io_md": false, 00:26:22.975 "nvme_iov_md": false, 00:26:22.975 "read": true, 00:26:22.975 "reset": true, 00:26:22.975 "seek_data": false, 00:26:22.975 "seek_hole": false, 00:26:22.975 "unmap": false, 00:26:22.975 "write": true, 00:26:22.975 "write_zeroes": true, 00:26:22.975 "zcopy": false, 00:26:22.975 
"zone_append": false, 00:26:22.975 "zone_management": false 00:26:22.975 }, 00:26:22.975 "uuid": "6822b8ba-a24e-433c-8c4e-37bb1544a02a", 00:26:22.975 "zoned": false 00:26:22.975 } 00:26:22.975 ] 00:26:22.975 10:07:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:22.975 10:07:36 nvmf_tcp.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:26:22.975 10:07:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:22.975 10:07:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:22.975 [2024-07-15 10:07:36.546101] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:22.975 [2024-07-15 10:07:36.546184] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x185ba30 (9): Bad file descriptor 00:26:23.236 [2024-07-15 10:07:36.677795] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:26:23.236 10:07:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:23.236 10:07:36 nvmf_tcp.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:26:23.236 10:07:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:23.236 10:07:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:23.236 [ 00:26:23.236 { 00:26:23.236 "aliases": [ 00:26:23.236 "6822b8ba-a24e-433c-8c4e-37bb1544a02a" 00:26:23.236 ], 00:26:23.236 "assigned_rate_limits": { 00:26:23.236 "r_mbytes_per_sec": 0, 00:26:23.236 "rw_ios_per_sec": 0, 00:26:23.236 "rw_mbytes_per_sec": 0, 00:26:23.236 "w_mbytes_per_sec": 0 00:26:23.236 }, 00:26:23.236 "block_size": 512, 00:26:23.236 "claimed": false, 00:26:23.236 "driver_specific": { 00:26:23.236 "mp_policy": "active_passive", 00:26:23.236 "nvme": [ 00:26:23.236 { 00:26:23.236 "ctrlr_data": { 00:26:23.236 "ana_reporting": false, 00:26:23.236 "cntlid": 2, 00:26:23.236 "firmware_revision": "24.09", 00:26:23.236 "model_number": "SPDK bdev Controller", 00:26:23.236 "multi_ctrlr": true, 00:26:23.236 "oacs": { 00:26:23.236 "firmware": 0, 00:26:23.236 "format": 0, 00:26:23.236 "ns_manage": 0, 00:26:23.236 "security": 0 00:26:23.236 }, 00:26:23.236 "serial_number": "00000000000000000000", 00:26:23.236 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:23.236 "vendor_id": "0x8086" 00:26:23.236 }, 00:26:23.236 "ns_data": { 00:26:23.236 "can_share": true, 00:26:23.236 "id": 1 00:26:23.236 }, 00:26:23.236 "trid": { 00:26:23.236 "adrfam": "IPv4", 00:26:23.236 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:23.236 "traddr": "10.0.0.2", 00:26:23.236 "trsvcid": "4420", 00:26:23.236 "trtype": "TCP" 00:26:23.236 }, 00:26:23.236 "vs": { 00:26:23.236 "nvme_version": "1.3" 00:26:23.236 } 00:26:23.236 } 00:26:23.236 ] 00:26:23.236 }, 00:26:23.236 "memory_domains": [ 00:26:23.236 { 00:26:23.236 "dma_device_id": "system", 00:26:23.236 "dma_device_type": 1 00:26:23.236 } 00:26:23.236 ], 00:26:23.236 "name": "nvme0n1", 00:26:23.236 "num_blocks": 2097152, 00:26:23.236 "product_name": "NVMe disk", 00:26:23.236 "supported_io_types": { 00:26:23.236 "abort": true, 00:26:23.236 "compare": true, 00:26:23.236 "compare_and_write": true, 00:26:23.236 "copy": true, 00:26:23.236 "flush": true, 00:26:23.236 "get_zone_info": false, 00:26:23.236 "nvme_admin": true, 00:26:23.236 "nvme_io": true, 00:26:23.236 "nvme_io_md": false, 00:26:23.236 "nvme_iov_md": false, 00:26:23.236 "read": true, 
00:26:23.236 "reset": true, 00:26:23.236 "seek_data": false, 00:26:23.236 "seek_hole": false, 00:26:23.236 "unmap": false, 00:26:23.236 "write": true, 00:26:23.236 "write_zeroes": true, 00:26:23.236 "zcopy": false, 00:26:23.236 "zone_append": false, 00:26:23.236 "zone_management": false 00:26:23.236 }, 00:26:23.236 "uuid": "6822b8ba-a24e-433c-8c4e-37bb1544a02a", 00:26:23.236 "zoned": false 00:26:23.237 } 00:26:23.237 ] 00:26:23.237 10:07:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:23.237 10:07:36 nvmf_tcp.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:23.237 10:07:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:23.237 10:07:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:23.237 10:07:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:23.237 10:07:36 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:26:23.237 10:07:36 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.vaBn1qkFPb 00:26:23.237 10:07:36 nvmf_tcp.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:26:23.237 10:07:36 nvmf_tcp.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.vaBn1qkFPb 00:26:23.237 10:07:36 nvmf_tcp.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:26:23.237 10:07:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:23.237 10:07:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:23.237 10:07:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:23.237 10:07:36 nvmf_tcp.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:26:23.237 10:07:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:23.237 10:07:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:23.237 [2024-07-15 10:07:36.753849] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:26:23.237 [2024-07-15 10:07:36.754006] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:26:23.237 10:07:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:23.237 10:07:36 nvmf_tcp.nvmf_async_init -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.vaBn1qkFPb 00:26:23.237 10:07:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:23.237 10:07:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:23.237 [2024-07-15 10:07:36.765842] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:26:23.237 10:07:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:23.237 10:07:36 nvmf_tcp.nvmf_async_init -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.vaBn1qkFPb 00:26:23.237 10:07:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:23.237 10:07:36 
nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:23.237 [2024-07-15 10:07:36.777814] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:26:23.237 [2024-07-15 10:07:36.777865] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:26:23.497 nvme0n1 00:26:23.497 10:07:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:23.497 10:07:36 nvmf_tcp.nvmf_async_init -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:26:23.497 10:07:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:23.497 10:07:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:23.497 [ 00:26:23.497 { 00:26:23.497 "aliases": [ 00:26:23.497 "6822b8ba-a24e-433c-8c4e-37bb1544a02a" 00:26:23.497 ], 00:26:23.497 "assigned_rate_limits": { 00:26:23.497 "r_mbytes_per_sec": 0, 00:26:23.497 "rw_ios_per_sec": 0, 00:26:23.497 "rw_mbytes_per_sec": 0, 00:26:23.497 "w_mbytes_per_sec": 0 00:26:23.497 }, 00:26:23.497 "block_size": 512, 00:26:23.497 "claimed": false, 00:26:23.497 "driver_specific": { 00:26:23.497 "mp_policy": "active_passive", 00:26:23.497 "nvme": [ 00:26:23.497 { 00:26:23.497 "ctrlr_data": { 00:26:23.497 "ana_reporting": false, 00:26:23.497 "cntlid": 3, 00:26:23.497 "firmware_revision": "24.09", 00:26:23.497 "model_number": "SPDK bdev Controller", 00:26:23.497 "multi_ctrlr": true, 00:26:23.497 "oacs": { 00:26:23.497 "firmware": 0, 00:26:23.497 "format": 0, 00:26:23.497 "ns_manage": 0, 00:26:23.497 "security": 0 00:26:23.497 }, 00:26:23.497 "serial_number": "00000000000000000000", 00:26:23.497 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:23.497 "vendor_id": "0x8086" 00:26:23.497 }, 00:26:23.497 "ns_data": { 00:26:23.497 "can_share": true, 00:26:23.497 "id": 1 00:26:23.497 }, 00:26:23.497 "trid": { 00:26:23.497 "adrfam": "IPv4", 00:26:23.497 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:23.497 "traddr": "10.0.0.2", 00:26:23.497 "trsvcid": "4421", 00:26:23.497 "trtype": "TCP" 00:26:23.497 }, 00:26:23.497 "vs": { 00:26:23.497 "nvme_version": "1.3" 00:26:23.497 } 00:26:23.497 } 00:26:23.497 ] 00:26:23.497 }, 00:26:23.497 "memory_domains": [ 00:26:23.497 { 00:26:23.497 "dma_device_id": "system", 00:26:23.497 "dma_device_type": 1 00:26:23.497 } 00:26:23.497 ], 00:26:23.497 "name": "nvme0n1", 00:26:23.497 "num_blocks": 2097152, 00:26:23.497 "product_name": "NVMe disk", 00:26:23.497 "supported_io_types": { 00:26:23.497 "abort": true, 00:26:23.497 "compare": true, 00:26:23.497 "compare_and_write": true, 00:26:23.497 "copy": true, 00:26:23.497 "flush": true, 00:26:23.497 "get_zone_info": false, 00:26:23.497 "nvme_admin": true, 00:26:23.497 "nvme_io": true, 00:26:23.497 "nvme_io_md": false, 00:26:23.497 "nvme_iov_md": false, 00:26:23.497 "read": true, 00:26:23.497 "reset": true, 00:26:23.497 "seek_data": false, 00:26:23.497 "seek_hole": false, 00:26:23.497 "unmap": false, 00:26:23.497 "write": true, 00:26:23.497 "write_zeroes": true, 00:26:23.497 "zcopy": false, 00:26:23.497 "zone_append": false, 00:26:23.497 "zone_management": false 00:26:23.497 }, 00:26:23.497 "uuid": "6822b8ba-a24e-433c-8c4e-37bb1544a02a", 00:26:23.497 "zoned": false 00:26:23.497 } 00:26:23.497 ] 00:26:23.497 10:07:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:23.497 10:07:36 nvmf_tcp.nvmf_async_init -- host/async_init.sh@72 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:26:23.497 10:07:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:23.497 10:07:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:23.497 10:07:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:23.497 10:07:36 nvmf_tcp.nvmf_async_init -- host/async_init.sh@75 -- # rm -f /tmp/tmp.vaBn1qkFPb 00:26:23.497 10:07:36 nvmf_tcp.nvmf_async_init -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:26:23.497 10:07:36 nvmf_tcp.nvmf_async_init -- host/async_init.sh@78 -- # nvmftestfini 00:26:23.497 10:07:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:23.497 10:07:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@117 -- # sync 00:26:23.497 10:07:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:23.497 10:07:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@120 -- # set +e 00:26:23.497 10:07:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:23.497 10:07:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:23.497 rmmod nvme_tcp 00:26:23.497 rmmod nvme_fabrics 00:26:23.497 rmmod nvme_keyring 00:26:23.497 10:07:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:23.497 10:07:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@124 -- # set -e 00:26:23.497 10:07:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@125 -- # return 0 00:26:23.497 10:07:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@489 -- # '[' -n 86412 ']' 00:26:23.497 10:07:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@490 -- # killprocess 86412 00:26:23.497 10:07:37 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@948 -- # '[' -z 86412 ']' 00:26:23.497 10:07:37 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@952 -- # kill -0 86412 00:26:23.497 10:07:37 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@953 -- # uname 00:26:23.497 10:07:37 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:23.497 10:07:37 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 86412 00:26:23.497 killing process with pid 86412 00:26:23.497 10:07:37 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:26:23.497 10:07:37 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:26:23.497 10:07:37 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 86412' 00:26:23.497 10:07:37 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@967 -- # kill 86412 00:26:23.497 [2024-07-15 10:07:37.057095] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:26:23.497 [2024-07-15 10:07:37.057128] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:26:23.497 10:07:37 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@972 -- # wait 86412 00:26:23.756 10:07:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:23.756 10:07:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:23.756 10:07:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:23.756 10:07:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:23.756 
10:07:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:23.756 10:07:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:23.756 10:07:37 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:23.756 10:07:37 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:23.756 10:07:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:26:23.756 ************************************ 00:26:23.756 END TEST nvmf_async_init 00:26:23.756 ************************************ 00:26:23.756 00:26:23.756 real 0m2.490s 00:26:23.756 user 0m2.264s 00:26:23.756 sys 0m0.629s 00:26:23.756 10:07:37 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:23.756 10:07:37 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:23.756 10:07:37 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:26:23.756 10:07:37 nvmf_tcp -- nvmf/nvmf.sh@94 -- # run_test dma /home/vagrant/spdk_repo/spdk/test/nvmf/host/dma.sh --transport=tcp 00:26:23.756 10:07:37 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:26:23.756 10:07:37 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:23.756 10:07:37 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:23.756 ************************************ 00:26:23.756 START TEST dma 00:26:23.756 ************************************ 00:26:23.756 10:07:37 nvmf_tcp.dma -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/dma.sh --transport=tcp 00:26:24.017 * Looking for test storage... 00:26:24.017 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:26:24.017 10:07:37 nvmf_tcp.dma -- host/dma.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:26:24.017 10:07:37 nvmf_tcp.dma -- nvmf/common.sh@7 -- # uname -s 00:26:24.017 10:07:37 nvmf_tcp.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:24.017 10:07:37 nvmf_tcp.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:24.017 10:07:37 nvmf_tcp.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:24.017 10:07:37 nvmf_tcp.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:24.017 10:07:37 nvmf_tcp.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:24.017 10:07:37 nvmf_tcp.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:24.017 10:07:37 nvmf_tcp.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:24.017 10:07:37 nvmf_tcp.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:24.017 10:07:37 nvmf_tcp.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:24.017 10:07:37 nvmf_tcp.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:24.017 10:07:37 nvmf_tcp.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec 00:26:24.017 10:07:37 nvmf_tcp.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=a2b6b25a-cc90-4aea-9f09-c06f8a634aec 00:26:24.017 10:07:37 nvmf_tcp.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:24.017 10:07:37 nvmf_tcp.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:24.017 10:07:37 nvmf_tcp.dma -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:26:24.017 10:07:37 nvmf_tcp.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:24.017 10:07:37 nvmf_tcp.dma -- nvmf/common.sh@45 -- # source 
/home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:24.017 10:07:37 nvmf_tcp.dma -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:24.017 10:07:37 nvmf_tcp.dma -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:24.017 10:07:37 nvmf_tcp.dma -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:24.017 10:07:37 nvmf_tcp.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:24.017 10:07:37 nvmf_tcp.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:24.017 10:07:37 nvmf_tcp.dma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:24.017 10:07:37 nvmf_tcp.dma -- paths/export.sh@5 -- # export PATH 00:26:24.017 10:07:37 nvmf_tcp.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:24.017 10:07:37 nvmf_tcp.dma -- nvmf/common.sh@47 -- # : 0 00:26:24.017 10:07:37 nvmf_tcp.dma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:24.017 10:07:37 nvmf_tcp.dma -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:24.018 10:07:37 nvmf_tcp.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:24.018 10:07:37 nvmf_tcp.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:24.018 10:07:37 nvmf_tcp.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:24.018 10:07:37 nvmf_tcp.dma -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:24.018 10:07:37 nvmf_tcp.dma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:24.018 10:07:37 nvmf_tcp.dma -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:24.018 10:07:37 nvmf_tcp.dma -- 
host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:26:24.018 10:07:37 nvmf_tcp.dma -- host/dma.sh@13 -- # exit 0 00:26:24.018 00:26:24.018 real 0m0.160s 00:26:24.018 user 0m0.082s 00:26:24.018 sys 0m0.087s 00:26:24.018 10:07:37 nvmf_tcp.dma -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:24.018 10:07:37 nvmf_tcp.dma -- common/autotest_common.sh@10 -- # set +x 00:26:24.018 ************************************ 00:26:24.018 END TEST dma 00:26:24.018 ************************************ 00:26:24.018 10:07:37 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:26:24.018 10:07:37 nvmf_tcp -- nvmf/nvmf.sh@97 -- # run_test nvmf_identify /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:26:24.018 10:07:37 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:26:24.018 10:07:37 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:24.018 10:07:37 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:24.018 ************************************ 00:26:24.018 START TEST nvmf_identify 00:26:24.018 ************************************ 00:26:24.018 10:07:37 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:26:24.278 * Looking for test storage... 00:26:24.278 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:26:24.278 10:07:37 nvmf_tcp.nvmf_identify -- host/identify.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:26:24.278 10:07:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:26:24.278 10:07:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:24.278 10:07:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:24.278 10:07:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:24.278 10:07:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:24.278 10:07:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:24.278 10:07:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:24.278 10:07:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:24.278 10:07:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:24.278 10:07:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:24.278 10:07:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:24.278 10:07:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec 00:26:24.278 10:07:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=a2b6b25a-cc90-4aea-9f09-c06f8a634aec 00:26:24.278 10:07:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:24.278 10:07:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:24.278 10:07:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:26:24.278 10:07:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:24.278 10:07:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:24.278 10:07:37 nvmf_tcp.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:24.278 10:07:37 nvmf_tcp.nvmf_identify -- scripts/common.sh@516 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:24.278 10:07:37 nvmf_tcp.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:24.278 10:07:37 nvmf_tcp.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:24.278 10:07:37 nvmf_tcp.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:24.279 10:07:37 nvmf_tcp.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:24.279 10:07:37 nvmf_tcp.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:26:24.279 10:07:37 nvmf_tcp.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:24.279 10:07:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:26:24.279 10:07:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:24.279 10:07:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:24.279 10:07:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:24.279 10:07:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:24.279 10:07:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:24.279 10:07:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:24.279 10:07:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:24.279 10:07:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:24.279 10:07:37 nvmf_tcp.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 
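Note on the nvmf/common.sh sourcing traced above: it is what gives the host-side tests their initiator identity and connection defaults. nvme gen-hostnqn produced the host NQN nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec, NVMF_PORT/NVMF_SECOND_PORT pin the listener ports to 4420/4421, NVME_SUBNQN defaults to nqn.2016-06.io.spdk:testnqn, and NVME_CONNECT/NVME_HOST carry the "nvme connect" command plus its --hostnqn/--hostid arguments. The sketch below is illustrative only, not the harness's own code path: it shows how an initiator would typically consume these values with nvme-cli against the 10.0.0.2 address that the veth topology set up later in this log provides; the HOSTNQN variable name is ours.

  # minimal sketch, assuming the kernel initiator module is loaded (modprobe nvme-tcp, as traced elsewhere in this log)
  HOSTNQN=$(nvme gen-hostnqn)           # or reuse the NQN recorded in the log above
  nvme connect -t tcp -a 10.0.0.2 -s 4420 \
      -n nqn.2016-06.io.spdk:testnqn \
      --hostnqn="$HOSTNQN"
  nvme list                             # the exported namespace shows up as /dev/nvmeXnY
  nvme disconnect -n nqn.2016-06.io.spdk:testnqn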
00:26:24.279 10:07:37 nvmf_tcp.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:26:24.279 10:07:37 nvmf_tcp.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:26:24.279 10:07:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:24.279 10:07:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:24.279 10:07:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:24.279 10:07:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:24.279 10:07:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:24.279 10:07:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:24.279 10:07:37 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:24.279 10:07:37 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:24.279 10:07:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:26:24.279 10:07:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:26:24.279 10:07:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:26:24.279 10:07:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:26:24.279 10:07:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:26:24.279 10:07:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@432 -- # nvmf_veth_init 00:26:24.279 10:07:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:24.279 10:07:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:24.279 10:07:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:26:24.279 10:07:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:26:24.279 10:07:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:26:24.279 10:07:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:26:24.279 10:07:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:26:24.279 10:07:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:24.279 10:07:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:26:24.279 10:07:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:26:24.279 10:07:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:26:24.279 10:07:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:26:24.279 10:07:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:26:24.279 10:07:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:26:24.279 Cannot find device "nvmf_tgt_br" 00:26:24.279 10:07:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@155 -- # true 00:26:24.279 10:07:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:26:24.279 Cannot find device "nvmf_tgt_br2" 00:26:24.279 10:07:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@156 -- # true 00:26:24.279 10:07:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:26:24.279 10:07:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@158 -- # ip link set 
nvmf_tgt_br down 00:26:24.279 Cannot find device "nvmf_tgt_br" 00:26:24.279 10:07:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@158 -- # true 00:26:24.279 10:07:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:26:24.279 Cannot find device "nvmf_tgt_br2" 00:26:24.279 10:07:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@159 -- # true 00:26:24.279 10:07:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:26:24.279 10:07:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:26:24.539 10:07:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:26:24.539 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:24.539 10:07:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@162 -- # true 00:26:24.539 10:07:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:26:24.539 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:24.539 10:07:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@163 -- # true 00:26:24.539 10:07:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:26:24.539 10:07:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:26:24.539 10:07:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:26:24.539 10:07:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:26:24.539 10:07:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:26:24.539 10:07:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:26:24.539 10:07:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:26:24.539 10:07:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:26:24.539 10:07:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:26:24.539 10:07:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:26:24.539 10:07:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:26:24.539 10:07:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:26:24.539 10:07:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:26:24.539 10:07:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:26:24.539 10:07:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:26:24.539 10:07:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:26:24.539 10:07:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:26:24.539 10:07:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:26:24.539 10:07:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:26:24.539 10:07:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:26:24.539 10:07:38 nvmf_tcp.nvmf_identify -- 
nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:26:24.539 10:07:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:26:24.539 10:07:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:26:24.539 10:07:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:26:24.539 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:24.539 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.070 ms 00:26:24.539 00:26:24.539 --- 10.0.0.2 ping statistics --- 00:26:24.539 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:24.539 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:26:24.539 10:07:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:26:24.539 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:26:24.539 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.045 ms 00:26:24.539 00:26:24.539 --- 10.0.0.3 ping statistics --- 00:26:24.539 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:24.539 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:26:24.539 10:07:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:26:24.799 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:24.799 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.020 ms 00:26:24.799 00:26:24.799 --- 10.0.0.1 ping statistics --- 00:26:24.799 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:24.799 rtt min/avg/max/mdev = 0.020/0.020/0.020/0.000 ms 00:26:24.799 10:07:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:24.799 10:07:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@433 -- # return 0 00:26:24.799 10:07:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:24.799 10:07:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:24.799 10:07:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:24.799 10:07:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:24.799 10:07:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:24.799 10:07:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:24.799 10:07:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:24.799 10:07:38 nvmf_tcp.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:26:24.799 10:07:38 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:24.799 10:07:38 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:26:24.799 10:07:38 nvmf_tcp.nvmf_identify -- host/identify.sh@18 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:26:24.799 10:07:38 nvmf_tcp.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=86684 00:26:24.799 10:07:38 nvmf_tcp.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:24.799 10:07:38 nvmf_tcp.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 86684 00:26:24.799 10:07:38 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@829 -- # '[' -z 86684 ']' 00:26:24.799 10:07:38 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:24.799 Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock... 00:26:24.799 10:07:38 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:24.799 10:07:38 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:24.799 10:07:38 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:24.799 10:07:38 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:26:24.799 [2024-07-15 10:07:38.189777] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:26:24.799 [2024-07-15 10:07:38.189836] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:24.799 [2024-07-15 10:07:38.327311] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:25.059 [2024-07-15 10:07:38.432783] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:25.059 [2024-07-15 10:07:38.432833] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:25.059 [2024-07-15 10:07:38.432839] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:25.059 [2024-07-15 10:07:38.432844] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:25.059 [2024-07-15 10:07:38.432848] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:25.059 [2024-07-15 10:07:38.433057] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:25.059 [2024-07-15 10:07:38.433259] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:26:25.059 [2024-07-15 10:07:38.433439] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:25.059 [2024-07-15 10:07:38.433444] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:26:25.629 10:07:39 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:25.629 10:07:39 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@862 -- # return 0 00:26:25.629 10:07:39 nvmf_tcp.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:25.629 10:07:39 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:25.629 10:07:39 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:26:25.629 [2024-07-15 10:07:39.077711] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:25.629 10:07:39 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:25.629 10:07:39 nvmf_tcp.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:26:25.629 10:07:39 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:25.629 10:07:39 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:26:25.629 10:07:39 nvmf_tcp.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:26:25.629 10:07:39 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:25.629 10:07:39 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:26:25.629 Malloc0 00:26:25.629 10:07:39 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:25.629 10:07:39 
nvmf_tcp.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:25.629 10:07:39 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:25.629 10:07:39 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:26:25.629 10:07:39 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:25.629 10:07:39 nvmf_tcp.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:26:25.629 10:07:39 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:25.629 10:07:39 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:26:25.629 10:07:39 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:25.629 10:07:39 nvmf_tcp.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:25.629 10:07:39 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:25.629 10:07:39 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:26:25.629 [2024-07-15 10:07:39.200679] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:25.630 10:07:39 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:25.630 10:07:39 nvmf_tcp.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:26:25.630 10:07:39 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:25.630 10:07:39 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:26:25.890 10:07:39 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:25.890 10:07:39 nvmf_tcp.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:26:25.891 10:07:39 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:25.891 10:07:39 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:26:25.891 [ 00:26:25.891 { 00:26:25.891 "allow_any_host": true, 00:26:25.891 "hosts": [], 00:26:25.891 "listen_addresses": [ 00:26:25.891 { 00:26:25.891 "adrfam": "IPv4", 00:26:25.891 "traddr": "10.0.0.2", 00:26:25.891 "trsvcid": "4420", 00:26:25.891 "trtype": "TCP" 00:26:25.891 } 00:26:25.891 ], 00:26:25.891 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:26:25.891 "subtype": "Discovery" 00:26:25.891 }, 00:26:25.891 { 00:26:25.891 "allow_any_host": true, 00:26:25.891 "hosts": [], 00:26:25.891 "listen_addresses": [ 00:26:25.891 { 00:26:25.891 "adrfam": "IPv4", 00:26:25.891 "traddr": "10.0.0.2", 00:26:25.891 "trsvcid": "4420", 00:26:25.891 "trtype": "TCP" 00:26:25.891 } 00:26:25.891 ], 00:26:25.891 "max_cntlid": 65519, 00:26:25.891 "max_namespaces": 32, 00:26:25.891 "min_cntlid": 1, 00:26:25.891 "model_number": "SPDK bdev Controller", 00:26:25.891 "namespaces": [ 00:26:25.891 { 00:26:25.891 "bdev_name": "Malloc0", 00:26:25.891 "eui64": "ABCDEF0123456789", 00:26:25.891 "name": "Malloc0", 00:26:25.891 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:26:25.891 "nsid": 1, 00:26:25.891 "uuid": "2d9de51d-f869-4427-99b3-75e2fc81ec4b" 00:26:25.891 } 00:26:25.891 ], 00:26:25.891 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:26:25.891 "serial_number": "SPDK00000000000001", 00:26:25.891 "subtype": "NVMe" 00:26:25.891 } 00:26:25.891 ] 
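At this point the identify test's target side is fully configured, which is exactly what the nvmf_get_subsystems output above reflects: a TCP transport with an 8192-byte I/O unit size, a 64 MiB Malloc0 bdev with 512-byte blocks, and subsystem nqn.2016-06.io.spdk:cnode1 exposing that bdev as namespace 1 on 10.0.0.2:4420 alongside the discovery subsystem. The rpc_cmd calls in the trace are the test harness's wrapper around SPDK's JSON-RPC client; as a hedged sketch (assuming the default /var/tmp/spdk.sock RPC socket, which matches the waitforlisten message above, and the repo path shown in the log), the equivalent standalone invocations would look like:

  cd /home/vagrant/spdk_repo/spdk
  # the target runs inside the nvmf_tgt_ns_spdk netns, but its UNIX RPC socket is still reachable from the host side
  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
      --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  ./scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  ./scripts/rpc.py nvmf_get_subsystems   # prints the same JSON summary shown above

The spdk_nvme_identify run that follows connects to the discovery service on 10.0.0.2:4420 and walks the standard controller bring-up (FABRIC CONNECT on the admin queue, read VS and CAP, toggle CC.EN, wait for CSTS.RDY, then IDENTIFY), which is what the nvme_ctrlr/nvme_tcp debug lines below are tracing.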
00:26:25.891 10:07:39 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:25.891 10:07:39 nvmf_tcp.nvmf_identify -- host/identify.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:26:25.891 [2024-07-15 10:07:39.268441] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:26:25.891 [2024-07-15 10:07:39.268496] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86745 ] 00:26:25.891 [2024-07-15 10:07:39.400279] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:26:25.891 [2024-07-15 10:07:39.400339] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:26:25.891 [2024-07-15 10:07:39.400343] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:26:25.891 [2024-07-15 10:07:39.400355] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:26:25.891 [2024-07-15 10:07:39.400361] sock.c: 337:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:26:25.891 [2024-07-15 10:07:39.400481] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:26:25.891 [2024-07-15 10:07:39.400514] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0xc3ba60 0 00:26:25.891 [2024-07-15 10:07:39.415669] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:26:25.891 [2024-07-15 10:07:39.415700] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:26:25.891 [2024-07-15 10:07:39.415703] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:26:25.891 [2024-07-15 10:07:39.415706] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:26:25.891 [2024-07-15 10:07:39.415744] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:25.891 [2024-07-15 10:07:39.415749] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:25.891 [2024-07-15 10:07:39.415752] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc3ba60) 00:26:25.891 [2024-07-15 10:07:39.415764] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:26:25.891 [2024-07-15 10:07:39.415791] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc7e840, cid 0, qid 0 00:26:25.891 [2024-07-15 10:07:39.422683] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:25.891 [2024-07-15 10:07:39.422693] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:25.891 [2024-07-15 10:07:39.422696] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:25.891 [2024-07-15 10:07:39.422698] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc7e840) on tqpair=0xc3ba60 00:26:25.891 [2024-07-15 10:07:39.422706] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:26:25.891 [2024-07-15 10:07:39.422713] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:26:25.891 [2024-07-15 10:07:39.422716] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:26:25.891 [2024-07-15 10:07:39.422729] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:25.891 [2024-07-15 10:07:39.422732] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:25.891 [2024-07-15 10:07:39.422735] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc3ba60) 00:26:25.891 [2024-07-15 10:07:39.422741] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.891 [2024-07-15 10:07:39.422762] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc7e840, cid 0, qid 0 00:26:25.891 [2024-07-15 10:07:39.422844] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:25.891 [2024-07-15 10:07:39.422851] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:25.891 [2024-07-15 10:07:39.422854] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:25.891 [2024-07-15 10:07:39.422856] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc7e840) on tqpair=0xc3ba60 00:26:25.891 [2024-07-15 10:07:39.422860] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:26:25.891 [2024-07-15 10:07:39.422864] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:26:25.891 [2024-07-15 10:07:39.422869] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:25.891 [2024-07-15 10:07:39.422872] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:25.891 [2024-07-15 10:07:39.422874] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc3ba60) 00:26:25.891 [2024-07-15 10:07:39.422879] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.891 [2024-07-15 10:07:39.422891] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc7e840, cid 0, qid 0 00:26:25.891 [2024-07-15 10:07:39.422936] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:25.891 [2024-07-15 10:07:39.422943] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:25.891 [2024-07-15 10:07:39.422946] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:25.891 [2024-07-15 10:07:39.422948] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc7e840) on tqpair=0xc3ba60 00:26:25.891 [2024-07-15 10:07:39.422952] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:26:25.891 [2024-07-15 10:07:39.422957] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:26:25.891 [2024-07-15 10:07:39.422962] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:25.891 [2024-07-15 10:07:39.422964] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:25.891 [2024-07-15 10:07:39.422967] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc3ba60) 00:26:25.891 
[2024-07-15 10:07:39.422971] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.891 [2024-07-15 10:07:39.422982] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc7e840, cid 0, qid 0 00:26:25.891 [2024-07-15 10:07:39.423022] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:25.891 [2024-07-15 10:07:39.423029] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:25.891 [2024-07-15 10:07:39.423031] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:25.891 [2024-07-15 10:07:39.423034] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc7e840) on tqpair=0xc3ba60 00:26:25.891 [2024-07-15 10:07:39.423037] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:26:25.891 [2024-07-15 10:07:39.423043] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:25.891 [2024-07-15 10:07:39.423046] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:25.891 [2024-07-15 10:07:39.423048] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc3ba60) 00:26:25.891 [2024-07-15 10:07:39.423053] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.891 [2024-07-15 10:07:39.423063] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc7e840, cid 0, qid 0 00:26:25.891 [2024-07-15 10:07:39.423107] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:25.891 [2024-07-15 10:07:39.423114] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:25.891 [2024-07-15 10:07:39.423116] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:25.891 [2024-07-15 10:07:39.423118] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc7e840) on tqpair=0xc3ba60 00:26:25.891 [2024-07-15 10:07:39.423121] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:26:25.891 [2024-07-15 10:07:39.423125] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:26:25.891 [2024-07-15 10:07:39.423129] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:26:25.891 [2024-07-15 10:07:39.423233] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:26:25.891 [2024-07-15 10:07:39.423239] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:26:25.891 [2024-07-15 10:07:39.423246] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:25.891 [2024-07-15 10:07:39.423249] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:25.891 [2024-07-15 10:07:39.423251] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc3ba60) 00:26:25.891 [2024-07-15 10:07:39.423255] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.891 [2024-07-15 
10:07:39.423266] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc7e840, cid 0, qid 0 00:26:25.891 [2024-07-15 10:07:39.423308] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:25.891 [2024-07-15 10:07:39.423315] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:25.891 [2024-07-15 10:07:39.423318] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:25.891 [2024-07-15 10:07:39.423320] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc7e840) on tqpair=0xc3ba60 00:26:25.891 [2024-07-15 10:07:39.423323] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:26:25.891 [2024-07-15 10:07:39.423329] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:25.891 [2024-07-15 10:07:39.423332] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:25.892 [2024-07-15 10:07:39.423334] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc3ba60) 00:26:25.892 [2024-07-15 10:07:39.423339] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.892 [2024-07-15 10:07:39.423349] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc7e840, cid 0, qid 0 00:26:25.892 [2024-07-15 10:07:39.423387] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:25.892 [2024-07-15 10:07:39.423394] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:25.892 [2024-07-15 10:07:39.423396] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:25.892 [2024-07-15 10:07:39.423398] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc7e840) on tqpair=0xc3ba60 00:26:25.892 [2024-07-15 10:07:39.423401] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:26:25.892 [2024-07-15 10:07:39.423404] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:26:25.892 [2024-07-15 10:07:39.423409] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:26:25.892 [2024-07-15 10:07:39.423416] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:26:25.892 [2024-07-15 10:07:39.423423] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:25.892 [2024-07-15 10:07:39.423426] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc3ba60) 00:26:25.892 [2024-07-15 10:07:39.423431] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.892 [2024-07-15 10:07:39.423441] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc7e840, cid 0, qid 0 00:26:25.892 [2024-07-15 10:07:39.423514] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:25.892 [2024-07-15 10:07:39.423521] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:25.892 [2024-07-15 10:07:39.423523] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:25.892 [2024-07-15 
10:07:39.423526] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xc3ba60): datao=0, datal=4096, cccid=0 00:26:25.892 [2024-07-15 10:07:39.423529] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xc7e840) on tqpair(0xc3ba60): expected_datao=0, payload_size=4096 00:26:25.892 [2024-07-15 10:07:39.423532] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:25.892 [2024-07-15 10:07:39.423538] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:26:25.892 [2024-07-15 10:07:39.423541] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:26:25.892 [2024-07-15 10:07:39.423547] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:25.892 [2024-07-15 10:07:39.423551] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:25.892 [2024-07-15 10:07:39.423553] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:25.892 [2024-07-15 10:07:39.423555] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc7e840) on tqpair=0xc3ba60 00:26:25.892 [2024-07-15 10:07:39.423562] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:26:25.892 [2024-07-15 10:07:39.423565] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:26:25.892 [2024-07-15 10:07:39.423568] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:26:25.892 [2024-07-15 10:07:39.423572] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:26:25.892 [2024-07-15 10:07:39.423575] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:26:25.892 [2024-07-15 10:07:39.423578] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:26:25.892 [2024-07-15 10:07:39.423585] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:26:25.892 [2024-07-15 10:07:39.423590] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:25.892 [2024-07-15 10:07:39.423592] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:25.892 [2024-07-15 10:07:39.423594] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc3ba60) 00:26:25.892 [2024-07-15 10:07:39.423599] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:26:25.892 [2024-07-15 10:07:39.423610] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc7e840, cid 0, qid 0 00:26:25.892 [2024-07-15 10:07:39.423656] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:25.892 [2024-07-15 10:07:39.423670] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:25.892 [2024-07-15 10:07:39.423672] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:25.892 [2024-07-15 10:07:39.423675] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc7e840) on tqpair=0xc3ba60 00:26:25.892 [2024-07-15 10:07:39.423681] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:25.892 [2024-07-15 10:07:39.423683] nvme_tcp.c: 
967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:25.892 [2024-07-15 10:07:39.423685] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc3ba60) 00:26:25.892 [2024-07-15 10:07:39.423690] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:25.892 [2024-07-15 10:07:39.423694] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:25.892 [2024-07-15 10:07:39.423696] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:25.892 [2024-07-15 10:07:39.423699] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0xc3ba60) 00:26:25.892 [2024-07-15 10:07:39.423703] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:25.892 [2024-07-15 10:07:39.423707] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:25.892 [2024-07-15 10:07:39.423709] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:25.892 [2024-07-15 10:07:39.423712] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0xc3ba60) 00:26:25.892 [2024-07-15 10:07:39.423716] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:25.892 [2024-07-15 10:07:39.423720] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:25.892 [2024-07-15 10:07:39.423722] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:25.892 [2024-07-15 10:07:39.423724] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc3ba60) 00:26:25.892 [2024-07-15 10:07:39.423728] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:25.892 [2024-07-15 10:07:39.423731] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:26:25.892 [2024-07-15 10:07:39.423740] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:26:25.892 [2024-07-15 10:07:39.423745] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:25.892 [2024-07-15 10:07:39.423747] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xc3ba60) 00:26:25.892 [2024-07-15 10:07:39.423752] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.892 [2024-07-15 10:07:39.423765] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc7e840, cid 0, qid 0 00:26:25.892 [2024-07-15 10:07:39.423769] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc7e9c0, cid 1, qid 0 00:26:25.892 [2024-07-15 10:07:39.423772] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc7eb40, cid 2, qid 0 00:26:25.892 [2024-07-15 10:07:39.423775] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc7ecc0, cid 3, qid 0 00:26:25.892 [2024-07-15 10:07:39.423778] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc7ee40, cid 4, qid 0 00:26:25.892 [2024-07-15 10:07:39.423856] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:25.892 [2024-07-15 
10:07:39.423863] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:25.892 [2024-07-15 10:07:39.423866] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:25.892 [2024-07-15 10:07:39.423868] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc7ee40) on tqpair=0xc3ba60 00:26:25.892 [2024-07-15 10:07:39.423872] nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:26:25.892 [2024-07-15 10:07:39.423878] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:26:25.892 [2024-07-15 10:07:39.423885] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:25.892 [2024-07-15 10:07:39.423888] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xc3ba60) 00:26:25.892 [2024-07-15 10:07:39.423892] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.892 [2024-07-15 10:07:39.423903] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc7ee40, cid 4, qid 0 00:26:25.892 [2024-07-15 10:07:39.423951] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:25.892 [2024-07-15 10:07:39.423958] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:25.892 [2024-07-15 10:07:39.423960] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:25.892 [2024-07-15 10:07:39.423962] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xc3ba60): datao=0, datal=4096, cccid=4 00:26:25.892 [2024-07-15 10:07:39.423965] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xc7ee40) on tqpair(0xc3ba60): expected_datao=0, payload_size=4096 00:26:25.892 [2024-07-15 10:07:39.423968] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:25.892 [2024-07-15 10:07:39.423973] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:26:25.892 [2024-07-15 10:07:39.423975] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:26:25.892 [2024-07-15 10:07:39.423980] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:25.892 [2024-07-15 10:07:39.423984] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:25.892 [2024-07-15 10:07:39.423987] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:25.892 [2024-07-15 10:07:39.423989] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc7ee40) on tqpair=0xc3ba60 00:26:25.892 [2024-07-15 10:07:39.423998] nvme_ctrlr.c:4160:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:26:25.892 [2024-07-15 10:07:39.424021] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:25.892 [2024-07-15 10:07:39.424024] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xc3ba60) 00:26:25.892 [2024-07-15 10:07:39.424029] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.892 [2024-07-15 10:07:39.424034] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:25.892 [2024-07-15 10:07:39.424036] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:25.892 [2024-07-15 10:07:39.424039] 
nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xc3ba60) 00:26:25.892 [2024-07-15 10:07:39.424043] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:26:25.892 [2024-07-15 10:07:39.424057] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc7ee40, cid 4, qid 0 00:26:25.892 [2024-07-15 10:07:39.424061] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc7efc0, cid 5, qid 0 00:26:25.892 [2024-07-15 10:07:39.424139] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:25.892 [2024-07-15 10:07:39.424146] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:25.892 [2024-07-15 10:07:39.424149] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:25.892 [2024-07-15 10:07:39.424151] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xc3ba60): datao=0, datal=1024, cccid=4 00:26:25.892 [2024-07-15 10:07:39.424153] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xc7ee40) on tqpair(0xc3ba60): expected_datao=0, payload_size=1024 00:26:25.892 [2024-07-15 10:07:39.424156] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:25.893 [2024-07-15 10:07:39.424161] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:26:25.893 [2024-07-15 10:07:39.424163] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:26:25.893 [2024-07-15 10:07:39.424167] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:25.893 [2024-07-15 10:07:39.424171] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:25.893 [2024-07-15 10:07:39.424173] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:25.893 [2024-07-15 10:07:39.424175] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc7efc0) on tqpair=0xc3ba60 00:26:25.893 [2024-07-15 10:07:39.464783] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:25.893 [2024-07-15 10:07:39.464812] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:25.893 [2024-07-15 10:07:39.464815] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:25.893 [2024-07-15 10:07:39.464820] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc7ee40) on tqpair=0xc3ba60 00:26:25.893 [2024-07-15 10:07:39.464843] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:25.893 [2024-07-15 10:07:39.464846] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xc3ba60) 00:26:25.893 [2024-07-15 10:07:39.464856] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.893 [2024-07-15 10:07:39.464891] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc7ee40, cid 4, qid 0 00:26:25.893 [2024-07-15 10:07:39.464998] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:25.893 [2024-07-15 10:07:39.465009] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:25.893 [2024-07-15 10:07:39.465012] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:25.893 [2024-07-15 10:07:39.465014] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xc3ba60): datao=0, datal=3072, cccid=4 00:26:25.893 [2024-07-15 10:07:39.465017] 
nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xc7ee40) on tqpair(0xc3ba60): expected_datao=0, payload_size=3072 00:26:25.893 [2024-07-15 10:07:39.465021] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:25.893 [2024-07-15 10:07:39.465028] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:26:25.893 [2024-07-15 10:07:39.465031] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:26:25.893 [2024-07-15 10:07:39.465037] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:25.893 [2024-07-15 10:07:39.465042] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:25.893 [2024-07-15 10:07:39.465045] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:25.893 [2024-07-15 10:07:39.465047] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc7ee40) on tqpair=0xc3ba60 00:26:25.893 [2024-07-15 10:07:39.465056] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:25.893 [2024-07-15 10:07:39.465059] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xc3ba60) 00:26:25.893 [2024-07-15 10:07:39.465064] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.893 [2024-07-15 10:07:39.465081] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc7ee40, cid 4, qid 0 00:26:25.893 [2024-07-15 10:07:39.465132] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:25.893 [2024-07-15 10:07:39.465142] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:25.893 [2024-07-15 10:07:39.465145] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:25.893 [2024-07-15 10:07:39.465147] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xc3ba60): datao=0, datal=8, cccid=4 00:26:25.893 [2024-07-15 10:07:39.465151] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xc7ee40) on tqpair(0xc3ba60): expected_datao=0, payload_size=8 00:26:25.893 [2024-07-15 10:07:39.465154] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:25.893 [2024-07-15 10:07:39.465159] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:26:25.893 [2024-07-15 10:07:39.465161] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:26:26.159 ===================================================== 00:26:26.159 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:26:26.159 ===================================================== 00:26:26.159 Controller Capabilities/Features 00:26:26.159 ================================ 00:26:26.159 Vendor ID: 0000 00:26:26.159 Subsystem Vendor ID: 0000 00:26:26.159 Serial Number: .................... 00:26:26.159 Model Number: ........................................ 
00:26:26.159 Firmware Version: 24.09 00:26:26.159 Recommended Arb Burst: 0 00:26:26.159 IEEE OUI Identifier: 00 00 00 00:26:26.159 Multi-path I/O 00:26:26.159 May have multiple subsystem ports: No 00:26:26.159 May have multiple controllers: No 00:26:26.159 Associated with SR-IOV VF: No 00:26:26.159 Max Data Transfer Size: 131072 00:26:26.159 Max Number of Namespaces: 0 00:26:26.159 Max Number of I/O Queues: 1024 00:26:26.159 NVMe Specification Version (VS): 1.3 00:26:26.159 NVMe Specification Version (Identify): 1.3 00:26:26.159 Maximum Queue Entries: 128 00:26:26.159 Contiguous Queues Required: Yes 00:26:26.159 Arbitration Mechanisms Supported 00:26:26.159 Weighted Round Robin: Not Supported 00:26:26.159 Vendor Specific: Not Supported 00:26:26.159 Reset Timeout: 15000 ms 00:26:26.159 Doorbell Stride: 4 bytes 00:26:26.159 NVM Subsystem Reset: Not Supported 00:26:26.159 Command Sets Supported 00:26:26.159 NVM Command Set: Supported 00:26:26.159 Boot Partition: Not Supported 00:26:26.160 Memory Page Size Minimum: 4096 bytes 00:26:26.160 Memory Page Size Maximum: 4096 bytes 00:26:26.160 Persistent Memory Region: Not Supported 00:26:26.160 Optional Asynchronous Events Supported 00:26:26.160 Namespace Attribute Notices: Not Supported 00:26:26.160 Firmware Activation Notices: Not Supported 00:26:26.160 ANA Change Notices: Not Supported 00:26:26.160 PLE Aggregate Log Change Notices: Not Supported 00:26:26.160 LBA Status Info Alert Notices: Not Supported 00:26:26.160 EGE Aggregate Log Change Notices: Not Supported 00:26:26.160 Normal NVM Subsystem Shutdown event: Not Supported 00:26:26.160 Zone Descriptor Change Notices: Not Supported 00:26:26.160 Discovery Log Change Notices: Supported 00:26:26.160 Controller Attributes 00:26:26.160 128-bit Host Identifier: Not Supported 00:26:26.160 Non-Operational Permissive Mode: Not Supported 00:26:26.160 NVM Sets: Not Supported 00:26:26.160 Read Recovery Levels: Not Supported 00:26:26.160 Endurance Groups: Not Supported 00:26:26.160 Predictable Latency Mode: Not Supported 00:26:26.160 Traffic Based Keep ALive: Not Supported 00:26:26.160 Namespace Granularity: Not Supported 00:26:26.160 SQ Associations: Not Supported 00:26:26.160 UUID List: Not Supported 00:26:26.160 Multi-Domain Subsystem: Not Supported 00:26:26.160 Fixed Capacity Management: Not Supported 00:26:26.160 Variable Capacity Management: Not Supported 00:26:26.160 Delete Endurance Group: Not Supported 00:26:26.160 Delete NVM Set: Not Supported 00:26:26.160 Extended LBA Formats Supported: Not Supported 00:26:26.160 Flexible Data Placement Supported: Not Supported 00:26:26.160 00:26:26.160 Controller Memory Buffer Support 00:26:26.160 ================================ 00:26:26.160 Supported: No 00:26:26.160 00:26:26.160 Persistent Memory Region Support 00:26:26.160 ================================ 00:26:26.160 Supported: No 00:26:26.160 00:26:26.160 Admin Command Set Attributes 00:26:26.160 ============================ 00:26:26.160 Security Send/Receive: Not Supported 00:26:26.160 Format NVM: Not Supported 00:26:26.160 Firmware Activate/Download: Not Supported 00:26:26.160 Namespace Management: Not Supported 00:26:26.160 Device Self-Test: Not Supported 00:26:26.160 Directives: Not Supported 00:26:26.160 NVMe-MI: Not Supported 00:26:26.160 Virtualization Management: Not Supported 00:26:26.160 Doorbell Buffer Config: Not Supported 00:26:26.160 Get LBA Status Capability: Not Supported 00:26:26.160 Command & Feature Lockdown Capability: Not Supported 00:26:26.160 Abort Command Limit: 1 00:26:26.160 Async 
Event Request Limit: 4 00:26:26.160 Number of Firmware Slots: N/A 00:26:26.160 Firmware Slot 1 Read-Only: N/A 00:26:26.160 Firmware Activation Without Reset: N/A 00:26:26.160 Multiple Update Detection Support: N/A 00:26:26.160 Firmware Update Granularity: No Information Provided 00:26:26.160 Per-Namespace SMART Log: No 00:26:26.160 Asymmetric Namespace Access Log Page: Not Supported 00:26:26.160 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:26:26.160 Command Effects Log Page: Not Supported 00:26:26.160 Get Log Page Extended Data: Supported 00:26:26.160 Telemetry Log Pages: Not Supported 00:26:26.160 Persistent Event Log Pages: Not Supported 00:26:26.160 Supported Log Pages Log Page: May Support 00:26:26.160 Commands Supported & Effects Log Page: Not Supported 00:26:26.160 Feature Identifiers & Effects Log Page:May Support 00:26:26.160 NVMe-MI Commands & Effects Log Page: May Support 00:26:26.160 Data Area 4 for Telemetry Log: Not Supported 00:26:26.160 Error Log Page Entries Supported: 128 00:26:26.160 Keep Alive: Not Supported 00:26:26.160 00:26:26.160 NVM Command Set Attributes 00:26:26.160 ========================== 00:26:26.160 Submission Queue Entry Size 00:26:26.160 Max: 1 00:26:26.160 Min: 1 00:26:26.160 Completion Queue Entry Size 00:26:26.160 Max: 1 00:26:26.160 Min: 1 00:26:26.160 Number of Namespaces: 0 00:26:26.160 Compare Command: Not Supported 00:26:26.160 Write Uncorrectable Command: Not Supported 00:26:26.160 Dataset Management Command: Not Supported 00:26:26.160 Write Zeroes Command: Not Supported 00:26:26.160 Set Features Save Field: Not Supported 00:26:26.160 Reservations: Not Supported 00:26:26.160 Timestamp: Not Supported 00:26:26.160 Copy: Not Supported 00:26:26.160 Volatile Write Cache: Not Present 00:26:26.160 Atomic Write Unit (Normal): 1 00:26:26.160 Atomic Write Unit (PFail): 1 00:26:26.160 Atomic Compare & Write Unit: 1 00:26:26.160 Fused Compare & Write: Supported 00:26:26.160 Scatter-Gather List 00:26:26.160 SGL Command Set: Supported 00:26:26.160 SGL Keyed: Supported 00:26:26.160 SGL Bit Bucket Descriptor: Not Supported 00:26:26.160 SGL Metadata Pointer: Not Supported 00:26:26.160 Oversized SGL: Not Supported 00:26:26.160 SGL Metadata Address: Not Supported 00:26:26.160 SGL Offset: Supported 00:26:26.160 Transport SGL Data Block: Not Supported 00:26:26.160 Replay Protected Memory Block: Not Supported 00:26:26.160 00:26:26.160 Firmware Slot Information 00:26:26.160 ========================= 00:26:26.160 Active slot: 0 00:26:26.160 00:26:26.160 00:26:26.160 Error Log 00:26:26.160 ========= 00:26:26.160 00:26:26.160 Active Namespaces 00:26:26.160 ================= 00:26:26.160 Discovery Log Page 00:26:26.160 ================== 00:26:26.160 Generation Counter: 2 00:26:26.160 Number of Records: 2 00:26:26.160 Record Format: 0 00:26:26.160 00:26:26.160 Discovery Log Entry 0 00:26:26.160 ---------------------- 00:26:26.160 Transport Type: 3 (TCP) 00:26:26.160 Address Family: 1 (IPv4) 00:26:26.160 Subsystem Type: 3 (Current Discovery Subsystem) 00:26:26.160 Entry Flags: 00:26:26.160 Duplicate Returned Information: 1 00:26:26.160 Explicit Persistent Connection Support for Discovery: 1 00:26:26.160 Transport Requirements: 00:26:26.160 Secure Channel: Not Required 00:26:26.160 Port ID: 0 (0x0000) 00:26:26.160 Controller ID: 65535 (0xffff) 00:26:26.160 Admin Max SQ Size: 128 00:26:26.160 Transport Service Identifier: 4420 00:26:26.160 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:26:26.160 Transport Address: 10.0.0.2 00:26:26.160 
Discovery Log Entry 1 00:26:26.160 ---------------------- 00:26:26.160 Transport Type: 3 (TCP) 00:26:26.160 Address Family: 1 (IPv4) 00:26:26.160 Subsystem Type: 2 (NVM Subsystem) 00:26:26.160 Entry Flags: 00:26:26.160 Duplicate Returned Information: 0 00:26:26.160 Explicit Persistent Connection Support for Discovery: 0 00:26:26.160 Transport Requirements: 00:26:26.160 Secure Channel: Not Required 00:26:26.160 Port ID: 0 (0x0000) 00:26:26.160 Controller ID: 65535 (0xffff) 00:26:26.160 Admin Max SQ Size: 128 00:26:26.160 Transport Service Identifier: 4420 00:26:26.160 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:26:26.160 Transport Address: 10.0.0.2 [2024-07-15 10:07:39.507692] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:26.160 [2024-07-15 10:07:39.507717] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:26.160 [2024-07-15 10:07:39.507720] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:26.160 [2024-07-15 10:07:39.507724] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc7ee40) on tqpair=0xc3ba60 00:26:26.160 [2024-07-15 10:07:39.507838] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:26:26.160 [2024-07-15 10:07:39.507847] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc7e840) on tqpair=0xc3ba60 00:26:26.160 [2024-07-15 10:07:39.507853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.160 [2024-07-15 10:07:39.507858] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc7e9c0) on tqpair=0xc3ba60 00:26:26.160 [2024-07-15 10:07:39.507861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.160 [2024-07-15 10:07:39.507864] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc7eb40) on tqpair=0xc3ba60 00:26:26.160 [2024-07-15 10:07:39.507868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.160 [2024-07-15 10:07:39.507871] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc7ecc0) on tqpair=0xc3ba60 00:26:26.160 [2024-07-15 10:07:39.507874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.160 [2024-07-15 10:07:39.507886] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:26.160 [2024-07-15 10:07:39.507889] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:26.160 [2024-07-15 10:07:39.507892] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc3ba60) 00:26:26.160 [2024-07-15 10:07:39.507900] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.160 [2024-07-15 10:07:39.507919] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc7ecc0, cid 3, qid 0 00:26:26.160 [2024-07-15 10:07:39.507990] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:26.160 [2024-07-15 10:07:39.508000] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:26.160 [2024-07-15 10:07:39.508003] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:26.160 [2024-07-15 10:07:39.508006] 
nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc7ecc0) on tqpair=0xc3ba60 00:26:26.160 [2024-07-15 10:07:39.508011] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:26.160 [2024-07-15 10:07:39.508014] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:26.160 [2024-07-15 10:07:39.508016] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc3ba60) 00:26:26.160 [2024-07-15 10:07:39.508021] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.160 [2024-07-15 10:07:39.508036] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc7ecc0, cid 3, qid 0 00:26:26.161 [2024-07-15 10:07:39.508102] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:26.161 [2024-07-15 10:07:39.508111] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:26.161 [2024-07-15 10:07:39.508113] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:26.161 [2024-07-15 10:07:39.508116] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc7ecc0) on tqpair=0xc3ba60 00:26:26.161 [2024-07-15 10:07:39.508120] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:26:26.161 [2024-07-15 10:07:39.508123] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:26:26.161 [2024-07-15 10:07:39.508129] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:26.161 [2024-07-15 10:07:39.508132] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:26.161 [2024-07-15 10:07:39.508134] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc3ba60) 00:26:26.161 [2024-07-15 10:07:39.508138] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.161 [2024-07-15 10:07:39.508149] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc7ecc0, cid 3, qid 0 00:26:26.161 [2024-07-15 10:07:39.508189] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:26.161 [2024-07-15 10:07:39.508196] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:26.161 [2024-07-15 10:07:39.508199] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:26.161 [2024-07-15 10:07:39.508201] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc7ecc0) on tqpair=0xc3ba60 00:26:26.161 [2024-07-15 10:07:39.508209] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:26.161 [2024-07-15 10:07:39.508212] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:26.161 [2024-07-15 10:07:39.508214] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc3ba60) 00:26:26.161 [2024-07-15 10:07:39.508219] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.161 [2024-07-15 10:07:39.508229] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc7ecc0, cid 3, qid 0 00:26:26.161 [2024-07-15 10:07:39.508271] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:26.161 [2024-07-15 10:07:39.508278] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:26.161 [2024-07-15 10:07:39.508281] 
nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:26.161 [2024-07-15 10:07:39.508283] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc7ecc0) on tqpair=0xc3ba60 00:26:26.161 [2024-07-15 10:07:39.508290] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:26.161 [2024-07-15 10:07:39.508292] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:26.161 [2024-07-15 10:07:39.508294] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc3ba60) 00:26:26.161 [2024-07-15 10:07:39.508299] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.161 [2024-07-15 10:07:39.508310] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc7ecc0, cid 3, qid 0 00:26:26.161 [2024-07-15 10:07:39.508358] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:26.161 [2024-07-15 10:07:39.508365] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:26.161 [2024-07-15 10:07:39.508367] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:26.161 [2024-07-15 10:07:39.508370] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc7ecc0) on tqpair=0xc3ba60 00:26:26.161 [2024-07-15 10:07:39.508376] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:26.161 [2024-07-15 10:07:39.508379] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:26.161 [2024-07-15 10:07:39.508382] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc3ba60) 00:26:26.161 [2024-07-15 10:07:39.508386] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.161 [2024-07-15 10:07:39.508397] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc7ecc0, cid 3, qid 0 00:26:26.161 [2024-07-15 10:07:39.508438] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:26.161 [2024-07-15 10:07:39.508442] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:26.161 [2024-07-15 10:07:39.508444] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:26.161 [2024-07-15 10:07:39.508447] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc7ecc0) on tqpair=0xc3ba60 00:26:26.161 [2024-07-15 10:07:39.508453] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:26.161 [2024-07-15 10:07:39.508456] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:26.161 [2024-07-15 10:07:39.508458] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc3ba60) 00:26:26.161 [2024-07-15 10:07:39.508463] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.161 [2024-07-15 10:07:39.508473] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc7ecc0, cid 3, qid 0 00:26:26.161 [2024-07-15 10:07:39.508512] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:26.161 [2024-07-15 10:07:39.508519] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:26.161 [2024-07-15 10:07:39.508522] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:26.161 [2024-07-15 10:07:39.508524] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc7ecc0) on tqpair=0xc3ba60 00:26:26.161 
[2024-07-15 10:07:39.508531] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:26.161 [2024-07-15 10:07:39.508533] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:26.161 [2024-07-15 10:07:39.508536] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc3ba60) 00:26:26.161 [2024-07-15 10:07:39.508540] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.161 [2024-07-15 10:07:39.508551] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc7ecc0, cid 3, qid 0 00:26:26.161 [2024-07-15 10:07:39.508591] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:26.161 [2024-07-15 10:07:39.508598] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:26.161 [2024-07-15 10:07:39.508601] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:26.161 [2024-07-15 10:07:39.508603] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc7ecc0) on tqpair=0xc3ba60 00:26:26.161 [2024-07-15 10:07:39.508610] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:26.161 [2024-07-15 10:07:39.508612] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:26.161 [2024-07-15 10:07:39.508615] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc3ba60) 00:26:26.161 [2024-07-15 10:07:39.508619] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.161 [2024-07-15 10:07:39.508630] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc7ecc0, cid 3, qid 0 00:26:26.161 [2024-07-15 10:07:39.508721] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:26.161 [2024-07-15 10:07:39.508731] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:26.161 [2024-07-15 10:07:39.508734] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:26.161 [2024-07-15 10:07:39.508737] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc7ecc0) on tqpair=0xc3ba60 00:26:26.161 [2024-07-15 10:07:39.508744] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:26.161 [2024-07-15 10:07:39.508747] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:26.161 [2024-07-15 10:07:39.508750] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc3ba60) 00:26:26.161 [2024-07-15 10:07:39.508755] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.161 [2024-07-15 10:07:39.508768] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc7ecc0, cid 3, qid 0 00:26:26.161 [2024-07-15 10:07:39.508811] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:26.161 [2024-07-15 10:07:39.508819] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:26.161 [2024-07-15 10:07:39.508821] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:26.161 [2024-07-15 10:07:39.508824] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc7ecc0) on tqpair=0xc3ba60 00:26:26.161 [2024-07-15 10:07:39.508832] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:26.161 [2024-07-15 10:07:39.508834] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:26.161 [2024-07-15 
10:07:39.508837] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc3ba60) 00:26:26.161 [2024-07-15 10:07:39.508842] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.161 [2024-07-15 10:07:39.508854] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc7ecc0, cid 3, qid 0 00:26:26.161 [2024-07-15 10:07:39.508896] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:26.161 [2024-07-15 10:07:39.508904] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:26.161 [2024-07-15 10:07:39.508907] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:26.161 [2024-07-15 10:07:39.508909] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc7ecc0) on tqpair=0xc3ba60 00:26:26.161 [2024-07-15 10:07:39.508917] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:26.161 [2024-07-15 10:07:39.508920] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:26.161 [2024-07-15 10:07:39.508922] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc3ba60) 00:26:26.161 [2024-07-15 10:07:39.508927] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.161 [2024-07-15 10:07:39.508940] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc7ecc0, cid 3, qid 0 00:26:26.161 [2024-07-15 10:07:39.508983] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:26.161 [2024-07-15 10:07:39.508988] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:26.161 [2024-07-15 10:07:39.508990] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:26.161 [2024-07-15 10:07:39.508993] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc7ecc0) on tqpair=0xc3ba60 00:26:26.161 [2024-07-15 10:07:39.509000] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:26.161 [2024-07-15 10:07:39.509003] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:26.161 [2024-07-15 10:07:39.509006] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc3ba60) 00:26:26.161 [2024-07-15 10:07:39.509011] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.161 [2024-07-15 10:07:39.509023] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc7ecc0, cid 3, qid 0 00:26:26.161 [2024-07-15 10:07:39.509071] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:26.161 [2024-07-15 10:07:39.509079] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:26.161 [2024-07-15 10:07:39.509081] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:26.161 [2024-07-15 10:07:39.509084] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc7ecc0) on tqpair=0xc3ba60 00:26:26.161 [2024-07-15 10:07:39.509092] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:26.161 [2024-07-15 10:07:39.509095] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:26.161 [2024-07-15 10:07:39.509097] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc3ba60) 00:26:26.161 [2024-07-15 10:07:39.509102] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.161 [2024-07-15 10:07:39.509114] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc7ecc0, cid 3, qid 0 00:26:26.161 [2024-07-15 10:07:39.509159] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:26.161 [2024-07-15 10:07:39.509168] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:26.161 [2024-07-15 10:07:39.509170] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:26.161 [2024-07-15 10:07:39.509173] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc7ecc0) on tqpair=0xc3ba60 00:26:26.161 [2024-07-15 10:07:39.509181] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:26.162 [2024-07-15 10:07:39.509184] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:26.162 [2024-07-15 10:07:39.509186] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc3ba60) 00:26:26.162 [2024-07-15 10:07:39.509191] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.162 [2024-07-15 10:07:39.509203] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc7ecc0, cid 3, qid 0 00:26:26.162 [2024-07-15 10:07:39.509249] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:26.162 [2024-07-15 10:07:39.509257] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:26.162 [2024-07-15 10:07:39.509260] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:26.162 [2024-07-15 10:07:39.509262] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc7ecc0) on tqpair=0xc3ba60 00:26:26.162 [2024-07-15 10:07:39.509269] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:26.162 [2024-07-15 10:07:39.509273] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:26.162 [2024-07-15 10:07:39.509275] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc3ba60) 00:26:26.162 [2024-07-15 10:07:39.509281] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.162 [2024-07-15 10:07:39.509293] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc7ecc0, cid 3, qid 0 00:26:26.162 [2024-07-15 10:07:39.509340] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:26.162 [2024-07-15 10:07:39.509345] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:26.162 [2024-07-15 10:07:39.509348] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:26.162 [2024-07-15 10:07:39.509351] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc7ecc0) on tqpair=0xc3ba60 00:26:26.162 [2024-07-15 10:07:39.509358] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:26.162 [2024-07-15 10:07:39.509361] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:26.162 [2024-07-15 10:07:39.509364] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc3ba60) 00:26:26.162 [2024-07-15 10:07:39.509369] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.162 [2024-07-15 10:07:39.509381] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc7ecc0, cid 3, qid 0 00:26:26.162 [2024-07-15 
10:07:39.509421] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:26.162 [2024-07-15 10:07:39.509427] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:26.162 [2024-07-15 10:07:39.509429] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:26.162 [2024-07-15 10:07:39.509432] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc7ecc0) on tqpair=0xc3ba60 00:26:26.162 [2024-07-15 10:07:39.509440] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:26.162 [2024-07-15 10:07:39.509443] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:26.162 [2024-07-15 10:07:39.509445] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc3ba60) 00:26:26.162 [2024-07-15 10:07:39.509450] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.162 [2024-07-15 10:07:39.509462] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc7ecc0, cid 3, qid 0 00:26:26.162 [2024-07-15 10:07:39.509509] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:26.162 [2024-07-15 10:07:39.509517] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:26.162 [2024-07-15 10:07:39.509520] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:26.162 [2024-07-15 10:07:39.509523] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc7ecc0) on tqpair=0xc3ba60 00:26:26.162 [2024-07-15 10:07:39.509530] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:26.162 [2024-07-15 10:07:39.509533] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:26.162 [2024-07-15 10:07:39.509536] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc3ba60) 00:26:26.162 [2024-07-15 10:07:39.509541] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.162 [2024-07-15 10:07:39.509553] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc7ecc0, cid 3, qid 0 00:26:26.162 [2024-07-15 10:07:39.509595] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:26.162 [2024-07-15 10:07:39.509604] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:26.162 [2024-07-15 10:07:39.509606] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:26.162 [2024-07-15 10:07:39.509609] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc7ecc0) on tqpair=0xc3ba60 00:26:26.162 [2024-07-15 10:07:39.509616] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:26.162 [2024-07-15 10:07:39.509619] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:26.162 [2024-07-15 10:07:39.509622] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc3ba60) 00:26:26.162 [2024-07-15 10:07:39.509627] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.162 [2024-07-15 10:07:39.509639] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc7ecc0, cid 3, qid 0 00:26:26.162 [2024-07-15 10:07:39.509692] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:26.162 [2024-07-15 10:07:39.509697] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:26.162 [2024-07-15 
10:07:39.509700] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:26.162 [2024-07-15 10:07:39.509702] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc7ecc0) on tqpair=0xc3ba60 00:26:26.162 [2024-07-15 10:07:39.509710] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:26.162 [2024-07-15 10:07:39.509713] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:26.162 [2024-07-15 10:07:39.509715] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc3ba60) 00:26:26.162 [2024-07-15 10:07:39.509720] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.162 [2024-07-15 10:07:39.509733] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc7ecc0, cid 3, qid 0 00:26:26.162 [2024-07-15 10:07:39.509781] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:26.162 [2024-07-15 10:07:39.509786] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:26.162 [2024-07-15 10:07:39.509788] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:26.162 [2024-07-15 10:07:39.509791] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc7ecc0) on tqpair=0xc3ba60 00:26:26.162 [2024-07-15 10:07:39.509798] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:26.162 [2024-07-15 10:07:39.509801] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:26.162 [2024-07-15 10:07:39.509804] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc3ba60) 00:26:26.162 [2024-07-15 10:07:39.509809] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.162 [2024-07-15 10:07:39.509821] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc7ecc0, cid 3, qid 0 00:26:26.162 [2024-07-15 10:07:39.509863] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:26.162 [2024-07-15 10:07:39.509869] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:26.162 [2024-07-15 10:07:39.509871] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:26.162 [2024-07-15 10:07:39.509874] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc7ecc0) on tqpair=0xc3ba60 00:26:26.162 [2024-07-15 10:07:39.509882] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:26.162 [2024-07-15 10:07:39.509885] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:26.162 [2024-07-15 10:07:39.509887] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc3ba60) 00:26:26.162 [2024-07-15 10:07:39.509892] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.162 [2024-07-15 10:07:39.509904] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc7ecc0, cid 3, qid 0 00:26:26.162 [2024-07-15 10:07:39.509947] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:26.162 [2024-07-15 10:07:39.509952] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:26.162 [2024-07-15 10:07:39.509954] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:26.162 [2024-07-15 10:07:39.509957] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc7ecc0) on tqpair=0xc3ba60 
00:26:26.162 [2024-07-15 10:07:39.509965] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:26.162 [2024-07-15 10:07:39.509968] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:26.162 [2024-07-15 10:07:39.509970] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc3ba60) 00:26:26.162 [2024-07-15 10:07:39.509976] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.162 [2024-07-15 10:07:39.509999] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc7ecc0, cid 3, qid 0 00:26:26.162 [2024-07-15 10:07:39.510043] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:26.162 [2024-07-15 10:07:39.510049] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:26.162 [2024-07-15 10:07:39.510051] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:26.162 [2024-07-15 10:07:39.510053] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc7ecc0) on tqpair=0xc3ba60 00:26:26.162 [2024-07-15 10:07:39.510060] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:26.162 [2024-07-15 10:07:39.510063] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:26.162 [2024-07-15 10:07:39.510065] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc3ba60) 00:26:26.162 [2024-07-15 10:07:39.510070] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.162 [2024-07-15 10:07:39.510080] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc7ecc0, cid 3, qid 0 00:26:26.162 [2024-07-15 10:07:39.510118] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:26.162 [2024-07-15 10:07:39.510125] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:26.162 [2024-07-15 10:07:39.510127] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:26.162 [2024-07-15 10:07:39.510130] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc7ecc0) on tqpair=0xc3ba60 00:26:26.162 [2024-07-15 10:07:39.510136] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:26.162 [2024-07-15 10:07:39.510139] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:26.162 [2024-07-15 10:07:39.510141] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc3ba60) 00:26:26.162 [2024-07-15 10:07:39.510146] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.162 [2024-07-15 10:07:39.510156] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc7ecc0, cid 3, qid 0 00:26:26.162 [2024-07-15 10:07:39.510194] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:26.162 [2024-07-15 10:07:39.510199] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:26.162 [2024-07-15 10:07:39.510201] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:26.162 [2024-07-15 10:07:39.510203] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc7ecc0) on tqpair=0xc3ba60 00:26:26.162 [2024-07-15 10:07:39.510209] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:26.162 [2024-07-15 10:07:39.510212] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 
00:26:26.162 [2024-07-15 10:07:39.510214] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc3ba60) 00:26:26.162 [2024-07-15 10:07:39.510219] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.162 [2024-07-15 10:07:39.510230] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc7ecc0, cid 3, qid 0 00:26:26.162 [2024-07-15 10:07:39.510269] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:26.162 [2024-07-15 10:07:39.510274] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:26.162 [2024-07-15 10:07:39.510277] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:26.162 [2024-07-15 10:07:39.510279] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc7ecc0) on tqpair=0xc3ba60 00:26:26.163 [2024-07-15 10:07:39.510285] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:26.163 [2024-07-15 10:07:39.510288] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:26.163 [2024-07-15 10:07:39.510290] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc3ba60) 00:26:26.163 [2024-07-15 10:07:39.510295] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.163 [2024-07-15 10:07:39.510305] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc7ecc0, cid 3, qid 0 00:26:26.163 [2024-07-15 10:07:39.510342] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:26.163 [2024-07-15 10:07:39.510346] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:26.163 [2024-07-15 10:07:39.510348] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:26.163 [2024-07-15 10:07:39.510350] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc7ecc0) on tqpair=0xc3ba60 00:26:26.163 [2024-07-15 10:07:39.510357] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:26.163 [2024-07-15 10:07:39.510360] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:26.163 [2024-07-15 10:07:39.510362] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc3ba60) 00:26:26.163 [2024-07-15 10:07:39.510366] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.163 [2024-07-15 10:07:39.510377] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc7ecc0, cid 3, qid 0 00:26:26.163 [2024-07-15 10:07:39.510420] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:26.163 [2024-07-15 10:07:39.510424] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:26.163 [2024-07-15 10:07:39.510426] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:26.163 [2024-07-15 10:07:39.510429] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc7ecc0) on tqpair=0xc3ba60 00:26:26.163 [2024-07-15 10:07:39.510435] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:26.163 [2024-07-15 10:07:39.510438] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:26.163 [2024-07-15 10:07:39.510440] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc3ba60) 00:26:26.163 [2024-07-15 10:07:39.510445] nvme_qpair.c: 
218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.163 [2024-07-15 10:07:39.510455] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc7ecc0, cid 3, qid 0 00:26:26.163 [2024-07-15 10:07:39.510496] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:26.163 [2024-07-15 10:07:39.510501] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:26.163 [2024-07-15 10:07:39.510503] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:26.163 [2024-07-15 10:07:39.510505] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc7ecc0) on tqpair=0xc3ba60 00:26:26.163 [2024-07-15 10:07:39.510512] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:26.163 [2024-07-15 10:07:39.510514] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:26.163 [2024-07-15 10:07:39.510517] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc3ba60) 00:26:26.163 [2024-07-15 10:07:39.510521] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.163 [2024-07-15 10:07:39.510532] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc7ecc0, cid 3, qid 0 00:26:26.163 [2024-07-15 10:07:39.510571] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:26.163 [2024-07-15 10:07:39.510576] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:26.163 [2024-07-15 10:07:39.510578] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:26.163 [2024-07-15 10:07:39.510581] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc7ecc0) on tqpair=0xc3ba60 00:26:26.163 [2024-07-15 10:07:39.510587] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:26.163 [2024-07-15 10:07:39.510590] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:26.163 [2024-07-15 10:07:39.510592] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc3ba60) 00:26:26.163 [2024-07-15 10:07:39.510596] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.163 [2024-07-15 10:07:39.510607] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc7ecc0, cid 3, qid 0 00:26:26.163 [2024-07-15 10:07:39.510645] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:26.163 [2024-07-15 10:07:39.510649] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:26.163 [2024-07-15 10:07:39.510651] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:26.163 [2024-07-15 10:07:39.510654] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc7ecc0) on tqpair=0xc3ba60 00:26:26.163 [2024-07-15 10:07:39.510666] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:26.163 [2024-07-15 10:07:39.510669] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:26.163 [2024-07-15 10:07:39.510671] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc3ba60) 00:26:26.163 [2024-07-15 10:07:39.510676] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.163 [2024-07-15 10:07:39.510688] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp 
req 0xc7ecc0, cid 3, qid 0 00:26:26.163 [2024-07-15 10:07:39.510725] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:26.163 [2024-07-15 10:07:39.510730] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:26.163 [2024-07-15 10:07:39.510732] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:26.163 [2024-07-15 10:07:39.510735] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc7ecc0) on tqpair=0xc3ba60 00:26:26.163 [2024-07-15 10:07:39.510741] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:26.163 [2024-07-15 10:07:39.510744] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:26.163 [2024-07-15 10:07:39.510746] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc3ba60) 00:26:26.163 [2024-07-15 10:07:39.510750] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.163 [2024-07-15 10:07:39.510761] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc7ecc0, cid 3, qid 0 00:26:26.163 [2024-07-15 10:07:39.510798] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:26.163 [2024-07-15 10:07:39.510803] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:26.163 [2024-07-15 10:07:39.510805] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:26.163 [2024-07-15 10:07:39.510808] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc7ecc0) on tqpair=0xc3ba60 00:26:26.163 [2024-07-15 10:07:39.510814] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:26.163 [2024-07-15 10:07:39.510817] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:26.163 [2024-07-15 10:07:39.510819] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc3ba60) 00:26:26.163 [2024-07-15 10:07:39.510824] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.163 [2024-07-15 10:07:39.510834] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc7ecc0, cid 3, qid 0 00:26:26.163 [2024-07-15 10:07:39.510872] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:26.163 [2024-07-15 10:07:39.510880] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:26.163 [2024-07-15 10:07:39.510882] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:26.163 [2024-07-15 10:07:39.510884] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc7ecc0) on tqpair=0xc3ba60 00:26:26.163 [2024-07-15 10:07:39.510891] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:26.163 [2024-07-15 10:07:39.510893] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:26.163 [2024-07-15 10:07:39.510895] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc3ba60) 00:26:26.163 [2024-07-15 10:07:39.510900] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.163 [2024-07-15 10:07:39.510911] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc7ecc0, cid 3, qid 0 00:26:26.163 [2024-07-15 10:07:39.510947] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:26.163 [2024-07-15 10:07:39.510952] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: 
*DEBUG*: enter: pdu type =5 00:26:26.163 [2024-07-15 10:07:39.510954] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:26.163 [2024-07-15 10:07:39.510956] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc7ecc0) on tqpair=0xc3ba60 00:26:26.163 [2024-07-15 10:07:39.510963] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:26.163 [2024-07-15 10:07:39.510966] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:26.163 [2024-07-15 10:07:39.510968] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc3ba60) 00:26:26.163 [2024-07-15 10:07:39.510972] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.163 [2024-07-15 10:07:39.510983] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc7ecc0, cid 3, qid 0 00:26:26.163 [2024-07-15 10:07:39.511022] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:26.163 [2024-07-15 10:07:39.511027] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:26.163 [2024-07-15 10:07:39.511029] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:26.163 [2024-07-15 10:07:39.511032] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc7ecc0) on tqpair=0xc3ba60 00:26:26.163 [2024-07-15 10:07:39.511038] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:26.163 [2024-07-15 10:07:39.511041] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:26.163 [2024-07-15 10:07:39.511043] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc3ba60) 00:26:26.163 [2024-07-15 10:07:39.511048] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.163 [2024-07-15 10:07:39.511058] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc7ecc0, cid 3, qid 0 00:26:26.163 [2024-07-15 10:07:39.511098] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:26.163 [2024-07-15 10:07:39.511102] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:26.163 [2024-07-15 10:07:39.511104] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:26.163 [2024-07-15 10:07:39.511106] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc7ecc0) on tqpair=0xc3ba60 00:26:26.163 [2024-07-15 10:07:39.511113] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:26.163 [2024-07-15 10:07:39.511116] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:26.163 [2024-07-15 10:07:39.511118] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc3ba60) 00:26:26.163 [2024-07-15 10:07:39.511123] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.163 [2024-07-15 10:07:39.511133] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc7ecc0, cid 3, qid 0 00:26:26.163 [2024-07-15 10:07:39.511178] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:26.163 [2024-07-15 10:07:39.511182] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:26.163 [2024-07-15 10:07:39.511184] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:26.163 [2024-07-15 10:07:39.511187] nvme_tcp.c:1069:nvme_tcp_req_complete: 
*DEBUG*: complete tcp_req(0xc7ecc0) on tqpair=0xc3ba60 00:26:26.163 [2024-07-15 10:07:39.511193] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:26.163 [2024-07-15 10:07:39.511196] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:26.163 [2024-07-15 10:07:39.511198] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc3ba60) 00:26:26.163 [2024-07-15 10:07:39.511203] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.163 [2024-07-15 10:07:39.511213] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc7ecc0, cid 3, qid 0 00:26:26.163 [2024-07-15 10:07:39.511253] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:26.163 [2024-07-15 10:07:39.511258] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:26.163 [2024-07-15 10:07:39.511260] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:26.164 [2024-07-15 10:07:39.511262] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc7ecc0) on tqpair=0xc3ba60 00:26:26.164 [2024-07-15 10:07:39.511269] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:26.164 [2024-07-15 10:07:39.511272] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:26.164 [2024-07-15 10:07:39.511274] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc3ba60) 00:26:26.164 [2024-07-15 10:07:39.511278] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.164 [2024-07-15 10:07:39.511289] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc7ecc0, cid 3, qid 0 00:26:26.164 [2024-07-15 10:07:39.511327] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:26.164 [2024-07-15 10:07:39.511331] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:26.164 [2024-07-15 10:07:39.511334] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:26.164 [2024-07-15 10:07:39.511336] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc7ecc0) on tqpair=0xc3ba60 00:26:26.164 [2024-07-15 10:07:39.511342] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:26.164 [2024-07-15 10:07:39.511345] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:26.164 [2024-07-15 10:07:39.511347] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc3ba60) 00:26:26.164 [2024-07-15 10:07:39.511352] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.164 [2024-07-15 10:07:39.511362] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc7ecc0, cid 3, qid 0 00:26:26.164 [2024-07-15 10:07:39.511400] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:26.164 [2024-07-15 10:07:39.511404] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:26.164 [2024-07-15 10:07:39.511406] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:26.164 [2024-07-15 10:07:39.511409] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc7ecc0) on tqpair=0xc3ba60 00:26:26.164 [2024-07-15 10:07:39.511415] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:26.164 [2024-07-15 10:07:39.511418] nvme_tcp.c: 
967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:26.164 [2024-07-15 10:07:39.511420] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc3ba60) 00:26:26.164 [2024-07-15 10:07:39.511425] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.164 [2024-07-15 10:07:39.511435] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc7ecc0, cid 3, qid 0 00:26:26.164 [2024-07-15 10:07:39.511475] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:26.164 [2024-07-15 10:07:39.511480] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:26.164 [2024-07-15 10:07:39.511482] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:26.164 [2024-07-15 10:07:39.511484] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc7ecc0) on tqpair=0xc3ba60 00:26:26.164 [2024-07-15 10:07:39.511490] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:26.164 [2024-07-15 10:07:39.511493] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:26.164 [2024-07-15 10:07:39.511495] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc3ba60) 00:26:26.164 [2024-07-15 10:07:39.511500] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.164 [2024-07-15 10:07:39.511510] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc7ecc0, cid 3, qid 0 00:26:26.164 [2024-07-15 10:07:39.511550] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:26.164 [2024-07-15 10:07:39.511555] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:26.164 [2024-07-15 10:07:39.511557] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:26.164 [2024-07-15 10:07:39.511559] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc7ecc0) on tqpair=0xc3ba60 00:26:26.164 [2024-07-15 10:07:39.511566] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:26.164 [2024-07-15 10:07:39.511568] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:26.164 [2024-07-15 10:07:39.511571] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc3ba60) 00:26:26.164 [2024-07-15 10:07:39.511575] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.164 [2024-07-15 10:07:39.511586] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc7ecc0, cid 3, qid 0 00:26:26.164 [2024-07-15 10:07:39.511626] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:26.164 [2024-07-15 10:07:39.511630] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:26.164 [2024-07-15 10:07:39.511632] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:26.164 [2024-07-15 10:07:39.511635] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc7ecc0) on tqpair=0xc3ba60 00:26:26.164 [2024-07-15 10:07:39.511641] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:26.164 [2024-07-15 10:07:39.511643] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:26.164 [2024-07-15 10:07:39.511646] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc3ba60) 00:26:26.164 [2024-07-15 
10:07:39.511650] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.164 [2024-07-15 10:07:39.515667] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc7ecc0, cid 3, qid 0 00:26:26.164 [2024-07-15 10:07:39.515688] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:26.164 [2024-07-15 10:07:39.515693] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:26.164 [2024-07-15 10:07:39.515695] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:26.164 [2024-07-15 10:07:39.515698] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc7ecc0) on tqpair=0xc3ba60 00:26:26.164 [2024-07-15 10:07:39.515707] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:26.164 [2024-07-15 10:07:39.515710] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:26.164 [2024-07-15 10:07:39.515713] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc3ba60) 00:26:26.164 [2024-07-15 10:07:39.515718] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.164 [2024-07-15 10:07:39.515738] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc7ecc0, cid 3, qid 0 00:26:26.164 [2024-07-15 10:07:39.515794] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:26.164 [2024-07-15 10:07:39.515798] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:26.164 [2024-07-15 10:07:39.515801] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:26.164 [2024-07-15 10:07:39.515803] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc7ecc0) on tqpair=0xc3ba60 00:26:26.164 [2024-07-15 10:07:39.515808] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 7 milliseconds 00:26:26.164 00:26:26.164 10:07:39 nvmf_tcp.nvmf_identify -- host/identify.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:26:26.164 [2024-07-15 10:07:39.561654] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
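The spdk_nvme_identify invocation above (-r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1', with -L all turning on the debug log flags) is what drives the nvme_tcp.c / nvme_ctrlr.c state-machine trace that follows. As a minimal sketch, assuming the standard public SPDK helpers spdk_nvme_transport_id_parse() and spdk_nvme_connect() (the program and variable names here are illustrative, not taken from the identify tool's source), the same connection could be exercised like this:

#include <stdio.h>
#include <string.h>

#include "spdk/env.h"
#include "spdk/nvme.h"

int
main(void)
{
	struct spdk_env_opts env_opts;
	struct spdk_nvme_transport_id trid;
	struct spdk_nvme_ctrlr *ctrlr;
	const struct spdk_nvme_ctrlr_data *cdata;

	spdk_env_opts_init(&env_opts);
	env_opts.name = "identify_sketch";   /* illustrative app name */
	if (spdk_env_init(&env_opts) < 0) {
		return 1;
	}

	/* Same transport ID string as the -r argument logged above. */
	memset(&trid, 0, sizeof(trid));
	if (spdk_nvme_transport_id_parse(&trid,
	    "trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 "
	    "subnqn:nqn.2016-06.io.spdk:cnode1") != 0) {
		return 1;
	}

	/*
	 * spdk_nvme_connect() performs the FABRIC CONNECT, PROPERTY GET/SET
	 * (vs, cap, cc, csts) and IDENTIFY exchanges that the nvme_tcp.c /
	 * nvme_ctrlr.c debug lines below are tracing.
	 */
	ctrlr = spdk_nvme_connect(&trid, NULL, 0);
	if (ctrlr == NULL) {
		return 1;
	}

	cdata = spdk_nvme_ctrlr_get_data(ctrlr);
	printf("cntlid 0x%04x, mdts %u\n",
	       (unsigned)cdata->cntlid, (unsigned)cdata->mdts);

	spdk_nvme_detach(ctrlr);
	return 0;
}

The FABRIC CONNECT, the PROPERTY GET reads of vs/cap/cc/csts and the IDENTIFY capsules traced below are issued internally by spdk_nvme_connect(); the sketch only shows the caller's side of that sequence.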
00:26:26.164 [2024-07-15 10:07:39.561736] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86752 ] 00:26:26.164 [2024-07-15 10:07:39.690691] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:26:26.164 [2024-07-15 10:07:39.690756] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:26:26.164 [2024-07-15 10:07:39.690760] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:26:26.164 [2024-07-15 10:07:39.690773] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:26:26.164 [2024-07-15 10:07:39.690779] sock.c: 337:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:26:26.164 [2024-07-15 10:07:39.690903] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:26:26.164 [2024-07-15 10:07:39.690938] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1f6aa60 0 00:26:26.164 [2024-07-15 10:07:39.698672] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:26:26.164 [2024-07-15 10:07:39.698687] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:26:26.164 [2024-07-15 10:07:39.698691] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:26:26.164 [2024-07-15 10:07:39.698693] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:26:26.164 [2024-07-15 10:07:39.698728] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:26.164 [2024-07-15 10:07:39.698732] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:26.164 [2024-07-15 10:07:39.698735] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1f6aa60) 00:26:26.164 [2024-07-15 10:07:39.698746] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:26:26.164 [2024-07-15 10:07:39.698767] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fad840, cid 0, qid 0 00:26:26.164 [2024-07-15 10:07:39.706675] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:26.164 [2024-07-15 10:07:39.706688] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:26.165 [2024-07-15 10:07:39.706691] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:26.165 [2024-07-15 10:07:39.706694] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fad840) on tqpair=0x1f6aa60 00:26:26.165 [2024-07-15 10:07:39.706703] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:26:26.165 [2024-07-15 10:07:39.706709] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:26:26.165 [2024-07-15 10:07:39.706712] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:26:26.165 [2024-07-15 10:07:39.706730] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:26.165 [2024-07-15 10:07:39.706733] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:26.165 [2024-07-15 10:07:39.706735] nvme_tcp.c: 
976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1f6aa60) 00:26:26.165 [2024-07-15 10:07:39.706742] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.165 [2024-07-15 10:07:39.706765] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fad840, cid 0, qid 0 00:26:26.165 [2024-07-15 10:07:39.706822] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:26.165 [2024-07-15 10:07:39.706827] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:26.165 [2024-07-15 10:07:39.706830] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:26.165 [2024-07-15 10:07:39.706832] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fad840) on tqpair=0x1f6aa60 00:26:26.165 [2024-07-15 10:07:39.706836] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:26:26.165 [2024-07-15 10:07:39.706840] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:26:26.165 [2024-07-15 10:07:39.706845] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:26.165 [2024-07-15 10:07:39.706847] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:26.165 [2024-07-15 10:07:39.706849] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1f6aa60) 00:26:26.165 [2024-07-15 10:07:39.706853] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.165 [2024-07-15 10:07:39.706864] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fad840, cid 0, qid 0 00:26:26.165 [2024-07-15 10:07:39.706903] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:26.165 [2024-07-15 10:07:39.706907] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:26.165 [2024-07-15 10:07:39.706909] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:26.165 [2024-07-15 10:07:39.706911] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fad840) on tqpair=0x1f6aa60 00:26:26.165 [2024-07-15 10:07:39.706915] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:26:26.165 [2024-07-15 10:07:39.706920] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:26:26.165 [2024-07-15 10:07:39.706924] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:26.165 [2024-07-15 10:07:39.706927] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:26.165 [2024-07-15 10:07:39.706929] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1f6aa60) 00:26:26.165 [2024-07-15 10:07:39.706933] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.165 [2024-07-15 10:07:39.706943] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fad840, cid 0, qid 0 00:26:26.165 [2024-07-15 10:07:39.706980] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:26.165 [2024-07-15 10:07:39.706985] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:26.165 [2024-07-15 10:07:39.706987] 
nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:26.165 [2024-07-15 10:07:39.706989] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fad840) on tqpair=0x1f6aa60 00:26:26.165 [2024-07-15 10:07:39.706992] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:26:26.165 [2024-07-15 10:07:39.706998] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:26.165 [2024-07-15 10:07:39.707001] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:26.165 [2024-07-15 10:07:39.707003] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1f6aa60) 00:26:26.165 [2024-07-15 10:07:39.707023] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.165 [2024-07-15 10:07:39.707034] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fad840, cid 0, qid 0 00:26:26.165 [2024-07-15 10:07:39.707078] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:26.165 [2024-07-15 10:07:39.707083] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:26.165 [2024-07-15 10:07:39.707085] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:26.165 [2024-07-15 10:07:39.707087] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fad840) on tqpair=0x1f6aa60 00:26:26.165 [2024-07-15 10:07:39.707090] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:26:26.165 [2024-07-15 10:07:39.707093] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:26:26.165 [2024-07-15 10:07:39.707098] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:26:26.165 [2024-07-15 10:07:39.707201] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:26:26.165 [2024-07-15 10:07:39.707220] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:26:26.165 [2024-07-15 10:07:39.707227] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:26.165 [2024-07-15 10:07:39.707229] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:26.165 [2024-07-15 10:07:39.707231] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1f6aa60) 00:26:26.165 [2024-07-15 10:07:39.707236] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.165 [2024-07-15 10:07:39.707248] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fad840, cid 0, qid 0 00:26:26.165 [2024-07-15 10:07:39.707289] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:26.165 [2024-07-15 10:07:39.707293] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:26.165 [2024-07-15 10:07:39.707296] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:26.165 [2024-07-15 10:07:39.707298] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fad840) on tqpair=0x1f6aa60 00:26:26.165 [2024-07-15 10:07:39.707301] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:26:26.165 [2024-07-15 10:07:39.707307] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:26.165 [2024-07-15 10:07:39.707310] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:26.165 [2024-07-15 10:07:39.707312] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1f6aa60) 00:26:26.165 [2024-07-15 10:07:39.707317] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.165 [2024-07-15 10:07:39.707327] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fad840, cid 0, qid 0 00:26:26.165 [2024-07-15 10:07:39.707374] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:26.165 [2024-07-15 10:07:39.707379] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:26.165 [2024-07-15 10:07:39.707381] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:26.165 [2024-07-15 10:07:39.707384] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fad840) on tqpair=0x1f6aa60 00:26:26.165 [2024-07-15 10:07:39.707387] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:26:26.165 [2024-07-15 10:07:39.707390] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:26:26.165 [2024-07-15 10:07:39.707395] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:26:26.165 [2024-07-15 10:07:39.707401] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:26:26.165 [2024-07-15 10:07:39.707409] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:26.165 [2024-07-15 10:07:39.707411] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1f6aa60) 00:26:26.165 [2024-07-15 10:07:39.707416] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.165 [2024-07-15 10:07:39.707427] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fad840, cid 0, qid 0 00:26:26.165 [2024-07-15 10:07:39.707519] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:26.165 [2024-07-15 10:07:39.707523] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:26.165 [2024-07-15 10:07:39.707525] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:26.165 [2024-07-15 10:07:39.707528] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1f6aa60): datao=0, datal=4096, cccid=0 00:26:26.165 [2024-07-15 10:07:39.707531] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1fad840) on tqpair(0x1f6aa60): expected_datao=0, payload_size=4096 00:26:26.165 [2024-07-15 10:07:39.707534] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:26.165 [2024-07-15 10:07:39.707540] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:26:26.165 [2024-07-15 10:07:39.707543] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:26:26.165 [2024-07-15 
10:07:39.707549] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:26.165 [2024-07-15 10:07:39.707553] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:26.165 [2024-07-15 10:07:39.707555] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:26.165 [2024-07-15 10:07:39.707558] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fad840) on tqpair=0x1f6aa60 00:26:26.165 [2024-07-15 10:07:39.707564] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:26:26.165 [2024-07-15 10:07:39.707567] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:26:26.165 [2024-07-15 10:07:39.707570] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:26:26.165 [2024-07-15 10:07:39.707573] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:26:26.165 [2024-07-15 10:07:39.707575] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:26:26.165 [2024-07-15 10:07:39.707578] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:26:26.165 [2024-07-15 10:07:39.707583] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:26:26.165 [2024-07-15 10:07:39.707588] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:26.165 [2024-07-15 10:07:39.707590] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:26.165 [2024-07-15 10:07:39.707592] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1f6aa60) 00:26:26.165 [2024-07-15 10:07:39.707597] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:26:26.165 [2024-07-15 10:07:39.707609] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fad840, cid 0, qid 0 00:26:26.165 [2024-07-15 10:07:39.707681] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:26.165 [2024-07-15 10:07:39.707686] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:26.165 [2024-07-15 10:07:39.707688] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:26.165 [2024-07-15 10:07:39.707690] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fad840) on tqpair=0x1f6aa60 00:26:26.165 [2024-07-15 10:07:39.707696] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:26.165 [2024-07-15 10:07:39.707698] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:26.166 [2024-07-15 10:07:39.707700] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1f6aa60) 00:26:26.166 [2024-07-15 10:07:39.707704] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:26.166 [2024-07-15 10:07:39.707708] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:26.166 [2024-07-15 10:07:39.707711] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:26.166 [2024-07-15 10:07:39.707713] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1f6aa60) 00:26:26.166 
[2024-07-15 10:07:39.707717] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:26.166 [2024-07-15 10:07:39.707721] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:26.166 [2024-07-15 10:07:39.707723] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:26.166 [2024-07-15 10:07:39.707725] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1f6aa60) 00:26:26.166 [2024-07-15 10:07:39.707729] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:26.166 [2024-07-15 10:07:39.707733] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:26.166 [2024-07-15 10:07:39.707735] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:26.166 [2024-07-15 10:07:39.707738] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f6aa60) 00:26:26.166 [2024-07-15 10:07:39.707741] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:26.166 [2024-07-15 10:07:39.707744] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:26:26.166 [2024-07-15 10:07:39.707752] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:26:26.166 [2024-07-15 10:07:39.707757] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:26.166 [2024-07-15 10:07:39.707759] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1f6aa60) 00:26:26.166 [2024-07-15 10:07:39.707764] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.166 [2024-07-15 10:07:39.707777] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fad840, cid 0, qid 0 00:26:26.166 [2024-07-15 10:07:39.707781] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fad9c0, cid 1, qid 0 00:26:26.166 [2024-07-15 10:07:39.707784] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fadb40, cid 2, qid 0 00:26:26.166 [2024-07-15 10:07:39.707787] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fadcc0, cid 3, qid 0 00:26:26.166 [2024-07-15 10:07:39.707790] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fade40, cid 4, qid 0 00:26:26.166 [2024-07-15 10:07:39.707884] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:26.166 [2024-07-15 10:07:39.707893] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:26.166 [2024-07-15 10:07:39.707896] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:26.166 [2024-07-15 10:07:39.707899] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fade40) on tqpair=0x1f6aa60 00:26:26.166 [2024-07-15 10:07:39.707902] nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:26:26.166 [2024-07-15 10:07:39.707908] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:26:26.166 [2024-07-15 10:07:39.707914] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:26:26.166 [2024-07-15 10:07:39.707918] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:26:26.166 [2024-07-15 10:07:39.707923] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:26.166 [2024-07-15 10:07:39.707925] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:26.166 [2024-07-15 10:07:39.707928] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1f6aa60) 00:26:26.166 [2024-07-15 10:07:39.707932] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:26:26.166 [2024-07-15 10:07:39.707944] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fade40, cid 4, qid 0 00:26:26.166 [2024-07-15 10:07:39.707991] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:26.166 [2024-07-15 10:07:39.707996] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:26.166 [2024-07-15 10:07:39.707998] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:26.166 [2024-07-15 10:07:39.708000] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fade40) on tqpair=0x1f6aa60 00:26:26.166 [2024-07-15 10:07:39.708049] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:26:26.166 [2024-07-15 10:07:39.708055] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:26:26.166 [2024-07-15 10:07:39.708061] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:26.166 [2024-07-15 10:07:39.708063] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1f6aa60) 00:26:26.166 [2024-07-15 10:07:39.708068] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.166 [2024-07-15 10:07:39.708079] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fade40, cid 4, qid 0 00:26:26.166 [2024-07-15 10:07:39.708131] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:26.166 [2024-07-15 10:07:39.708135] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:26.166 [2024-07-15 10:07:39.708138] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:26.166 [2024-07-15 10:07:39.708140] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1f6aa60): datao=0, datal=4096, cccid=4 00:26:26.166 [2024-07-15 10:07:39.708144] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1fade40) on tqpair(0x1f6aa60): expected_datao=0, payload_size=4096 00:26:26.166 [2024-07-15 10:07:39.708146] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:26.166 [2024-07-15 10:07:39.708152] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:26:26.166 [2024-07-15 10:07:39.708154] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:26:26.166 [2024-07-15 10:07:39.708160] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:26.166 [2024-07-15 10:07:39.708164] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: 
*DEBUG*: enter: pdu type =5 00:26:26.166 [2024-07-15 10:07:39.708166] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:26.166 [2024-07-15 10:07:39.708169] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fade40) on tqpair=0x1f6aa60 00:26:26.166 [2024-07-15 10:07:39.708178] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:26:26.166 [2024-07-15 10:07:39.708185] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:26:26.166 [2024-07-15 10:07:39.708191] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:26:26.166 [2024-07-15 10:07:39.708196] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:26.166 [2024-07-15 10:07:39.708198] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1f6aa60) 00:26:26.166 [2024-07-15 10:07:39.708202] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.166 [2024-07-15 10:07:39.708213] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fade40, cid 4, qid 0 00:26:26.166 [2024-07-15 10:07:39.708279] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:26.166 [2024-07-15 10:07:39.708283] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:26.166 [2024-07-15 10:07:39.708286] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:26.166 [2024-07-15 10:07:39.708288] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1f6aa60): datao=0, datal=4096, cccid=4 00:26:26.166 [2024-07-15 10:07:39.708291] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1fade40) on tqpair(0x1f6aa60): expected_datao=0, payload_size=4096 00:26:26.166 [2024-07-15 10:07:39.708293] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:26.166 [2024-07-15 10:07:39.708298] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:26:26.166 [2024-07-15 10:07:39.708300] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:26:26.166 [2024-07-15 10:07:39.708305] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:26.166 [2024-07-15 10:07:39.708310] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:26.166 [2024-07-15 10:07:39.708312] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:26.166 [2024-07-15 10:07:39.708314] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fade40) on tqpair=0x1f6aa60 00:26:26.166 [2024-07-15 10:07:39.708324] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:26:26.166 [2024-07-15 10:07:39.708330] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:26:26.166 [2024-07-15 10:07:39.708342] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:26.166 [2024-07-15 10:07:39.708345] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1f6aa60) 00:26:26.166 [2024-07-15 10:07:39.708349] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 
cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.166 [2024-07-15 10:07:39.708361] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fade40, cid 4, qid 0 00:26:26.166 [2024-07-15 10:07:39.708413] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:26.166 [2024-07-15 10:07:39.708417] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:26.166 [2024-07-15 10:07:39.708420] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:26.166 [2024-07-15 10:07:39.708422] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1f6aa60): datao=0, datal=4096, cccid=4 00:26:26.166 [2024-07-15 10:07:39.708425] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1fade40) on tqpair(0x1f6aa60): expected_datao=0, payload_size=4096 00:26:26.166 [2024-07-15 10:07:39.708427] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:26.166 [2024-07-15 10:07:39.708432] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:26:26.166 [2024-07-15 10:07:39.708434] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:26:26.166 [2024-07-15 10:07:39.708440] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:26.166 [2024-07-15 10:07:39.708444] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:26.166 [2024-07-15 10:07:39.708446] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:26.166 [2024-07-15 10:07:39.708448] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fade40) on tqpair=0x1f6aa60 00:26:26.166 [2024-07-15 10:07:39.708453] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:26:26.166 [2024-07-15 10:07:39.708458] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:26:26.166 [2024-07-15 10:07:39.708465] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:26:26.166 [2024-07-15 10:07:39.708469] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host behavior support feature (timeout 30000 ms) 00:26:26.166 [2024-07-15 10:07:39.708472] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:26:26.166 [2024-07-15 10:07:39.708476] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:26:26.166 [2024-07-15 10:07:39.708479] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:26:26.166 [2024-07-15 10:07:39.708482] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:26:26.166 [2024-07-15 10:07:39.708485] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:26:26.166 [2024-07-15 10:07:39.708499] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:26.167 [2024-07-15 10:07:39.708502] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1f6aa60) 00:26:26.167 [2024-07-15 10:07:39.708507] nvme_qpair.c: 
213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.167 [2024-07-15 10:07:39.708512] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:26.167 [2024-07-15 10:07:39.708514] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:26.167 [2024-07-15 10:07:39.708516] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1f6aa60) 00:26:26.167 [2024-07-15 10:07:39.708520] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:26:26.167 [2024-07-15 10:07:39.708536] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fade40, cid 4, qid 0 00:26:26.167 [2024-07-15 10:07:39.708540] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fadfc0, cid 5, qid 0 00:26:26.167 [2024-07-15 10:07:39.708606] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:26.167 [2024-07-15 10:07:39.708611] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:26.167 [2024-07-15 10:07:39.708613] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:26.167 [2024-07-15 10:07:39.708615] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fade40) on tqpair=0x1f6aa60 00:26:26.167 [2024-07-15 10:07:39.708620] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:26.167 [2024-07-15 10:07:39.708624] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:26.167 [2024-07-15 10:07:39.708626] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:26.167 [2024-07-15 10:07:39.708628] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fadfc0) on tqpair=0x1f6aa60 00:26:26.167 [2024-07-15 10:07:39.708634] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:26.167 [2024-07-15 10:07:39.708637] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1f6aa60) 00:26:26.167 [2024-07-15 10:07:39.708642] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.167 [2024-07-15 10:07:39.708652] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fadfc0, cid 5, qid 0 00:26:26.167 [2024-07-15 10:07:39.708712] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:26.167 [2024-07-15 10:07:39.708717] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:26.167 [2024-07-15 10:07:39.708719] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:26.167 [2024-07-15 10:07:39.708722] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fadfc0) on tqpair=0x1f6aa60 00:26:26.167 [2024-07-15 10:07:39.708728] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:26.167 [2024-07-15 10:07:39.708730] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1f6aa60) 00:26:26.167 [2024-07-15 10:07:39.708735] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.167 [2024-07-15 10:07:39.708745] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fadfc0, cid 5, qid 0 00:26:26.167 [2024-07-15 10:07:39.708798] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:26.167 
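The GET FEATURES and GET LOG PAGE admin commands traced around this point appear to be the follow-up queries the identify tool issues once the controller reaches the ready state (the cdw10 values 07ff0001, 007f0002, 007f0003 and 03ff0005 select the error, SMART/health, firmware-slot and commands-supported-and-effects log pages). A minimal sketch of one such read, assuming the public spdk_nvme_ctrlr_cmd_get_log_page() helper and the usual admin-completion polling (the helper and constants are from the public SPDK headers; the surrounding function is illustrative only):

#include "spdk/stdinc.h"
#include "spdk/nvme.h"
#include "spdk/nvme_spec.h"

static void
health_log_done(void *arg, const struct spdk_nvme_cpl *cpl)
{
	(void)cpl;
	*(bool *)arg = true;    /* signal the polling loop below */
}

/* Read the 512-byte SMART / health log page (LID 0x02), the command that
 * shows up in the trace as "GET LOG PAGE ... cdw10:007f0002". */
static int
read_health_log(struct spdk_nvme_ctrlr *ctrlr,
		struct spdk_nvme_health_information_page *page)
{
	bool done = false;
	int rc;

	rc = spdk_nvme_ctrlr_cmd_get_log_page(ctrlr,
					      SPDK_NVME_LOG_HEALTH_INFORMATION,
					      SPDK_NVME_GLOBAL_NS_TAG,
					      page, sizeof(*page), 0,
					      health_log_done, &done);
	if (rc != 0) {
		return rc;
	}

	/* The capsule/PDU exchange for this admin command is what the
	 * nvme_tcp.c pdu_ch_handle/psh_handle debug lines are tracing. */
	while (!done) {
		spdk_nvme_ctrlr_process_admin_completions(ctrlr);
	}
	return 0;
}

The nsid:ffffffff shown in the GET LOG PAGE notices corresponds to SPDK_NVME_GLOBAL_NS_TAG in the sketch.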
[2024-07-15 10:07:39.708803] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:26.167 [2024-07-15 10:07:39.708805] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:26.167 [2024-07-15 10:07:39.708807] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fadfc0) on tqpair=0x1f6aa60 00:26:26.167 [2024-07-15 10:07:39.708813] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:26.167 [2024-07-15 10:07:39.708816] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1f6aa60) 00:26:26.167 [2024-07-15 10:07:39.708820] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.167 [2024-07-15 10:07:39.708831] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fadfc0, cid 5, qid 0 00:26:26.167 [2024-07-15 10:07:39.708866] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:26.167 [2024-07-15 10:07:39.708871] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:26.167 [2024-07-15 10:07:39.708873] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:26.167 [2024-07-15 10:07:39.708875] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fadfc0) on tqpair=0x1f6aa60 00:26:26.167 [2024-07-15 10:07:39.708887] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:26.167 [2024-07-15 10:07:39.708890] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1f6aa60) 00:26:26.167 [2024-07-15 10:07:39.708894] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.167 [2024-07-15 10:07:39.708899] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:26.167 [2024-07-15 10:07:39.708902] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1f6aa60) 00:26:26.167 [2024-07-15 10:07:39.708906] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.167 [2024-07-15 10:07:39.708911] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:26.167 [2024-07-15 10:07:39.708914] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x1f6aa60) 00:26:26.167 [2024-07-15 10:07:39.708918] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.167 [2024-07-15 10:07:39.708925] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:26.167 [2024-07-15 10:07:39.708928] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1f6aa60) 00:26:26.167 [2024-07-15 10:07:39.708932] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.167 [2024-07-15 10:07:39.708944] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fadfc0, cid 5, qid 0 00:26:26.167 [2024-07-15 10:07:39.708948] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fade40, cid 4, qid 0 00:26:26.167 [2024-07-15 10:07:39.708951] nvme_tcp.c: 
941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fae140, cid 6, qid 0 00:26:26.167 [2024-07-15 10:07:39.708954] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fae2c0, cid 7, qid 0 00:26:26.167 [2024-07-15 10:07:39.709092] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:26.167 [2024-07-15 10:07:39.709103] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:26.167 [2024-07-15 10:07:39.709106] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:26.167 [2024-07-15 10:07:39.709108] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1f6aa60): datao=0, datal=8192, cccid=5 00:26:26.167 [2024-07-15 10:07:39.709111] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1fadfc0) on tqpair(0x1f6aa60): expected_datao=0, payload_size=8192 00:26:26.167 [2024-07-15 10:07:39.709114] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:26.167 [2024-07-15 10:07:39.709126] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:26:26.167 [2024-07-15 10:07:39.709129] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:26:26.167 [2024-07-15 10:07:39.709133] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:26.167 [2024-07-15 10:07:39.709137] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:26.167 [2024-07-15 10:07:39.709139] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:26.167 [2024-07-15 10:07:39.709141] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1f6aa60): datao=0, datal=512, cccid=4 00:26:26.167 [2024-07-15 10:07:39.709144] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1fade40) on tqpair(0x1f6aa60): expected_datao=0, payload_size=512 00:26:26.167 [2024-07-15 10:07:39.709147] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:26.167 [2024-07-15 10:07:39.709151] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:26:26.167 [2024-07-15 10:07:39.709154] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:26:26.167 [2024-07-15 10:07:39.709158] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:26.167 [2024-07-15 10:07:39.709161] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:26.167 [2024-07-15 10:07:39.709163] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:26.167 [2024-07-15 10:07:39.709165] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1f6aa60): datao=0, datal=512, cccid=6 00:26:26.167 [2024-07-15 10:07:39.709168] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1fae140) on tqpair(0x1f6aa60): expected_datao=0, payload_size=512 00:26:26.167 [2024-07-15 10:07:39.709171] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:26.167 [2024-07-15 10:07:39.709175] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:26:26.167 [2024-07-15 10:07:39.709177] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:26:26.167 [2024-07-15 10:07:39.709181] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:26.167 [2024-07-15 10:07:39.709185] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:26.167 [2024-07-15 10:07:39.709187] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:26.167 [2024-07-15 10:07:39.709189] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on 
tqpair(0x1f6aa60): datao=0, datal=4096, cccid=7 00:26:26.167 [2024-07-15 10:07:39.709192] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1fae2c0) on tqpair(0x1f6aa60): expected_datao=0, payload_size=4096 00:26:26.167 [2024-07-15 10:07:39.709194] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:26.167 [2024-07-15 10:07:39.709199] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:26:26.167 [2024-07-15 10:07:39.709201] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:26:26.167 [2024-07-15 10:07:39.709207] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:26.167 [2024-07-15 10:07:39.709210] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:26.167 [2024-07-15 10:07:39.709213] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:26.167 [2024-07-15 10:07:39.709215] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fadfc0) on tqpair=0x1f6aa60 00:26:26.167 [2024-07-15 10:07:39.709227] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:26.167 [2024-07-15 10:07:39.709232] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:26.167 [2024-07-15 10:07:39.709234] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:26.167 [2024-07-15 10:07:39.709236] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fade40) on tqpair=0x1f6aa60 00:26:26.167 [2024-07-15 10:07:39.709245] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:26.167 [2024-07-15 10:07:39.709249] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:26.167 [2024-07-15 10:07:39.709251] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:26.167 [2024-07-15 10:07:39.709253] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fae140) on tqpair=0x1f6aa60 00:26:26.167 [2024-07-15 10:07:39.709258] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:26.167 ===================================================== 00:26:26.167 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:26.167 ===================================================== 00:26:26.167 Controller Capabilities/Features 00:26:26.167 ================================ 00:26:26.167 Vendor ID: 8086 00:26:26.167 Subsystem Vendor ID: 8086 00:26:26.167 Serial Number: SPDK00000000000001 00:26:26.167 Model Number: SPDK bdev Controller 00:26:26.167 Firmware Version: 24.09 00:26:26.167 Recommended Arb Burst: 6 00:26:26.167 IEEE OUI Identifier: e4 d2 5c 00:26:26.167 Multi-path I/O 00:26:26.167 May have multiple subsystem ports: Yes 00:26:26.167 May have multiple controllers: Yes 00:26:26.167 Associated with SR-IOV VF: No 00:26:26.167 Max Data Transfer Size: 131072 00:26:26.167 Max Number of Namespaces: 32 00:26:26.167 Max Number of I/O Queues: 127 00:26:26.167 NVMe Specification Version (VS): 1.3 00:26:26.167 NVMe Specification Version (Identify): 1.3 00:26:26.167 Maximum Queue Entries: 128 00:26:26.168 Contiguous Queues Required: Yes 00:26:26.168 Arbitration Mechanisms Supported 00:26:26.168 Weighted Round Robin: Not Supported 00:26:26.168 Vendor Specific: Not Supported 00:26:26.168 Reset Timeout: 15000 ms 00:26:26.168 Doorbell Stride: 4 bytes 00:26:26.168 NVM Subsystem Reset: Not Supported 00:26:26.168 Command Sets Supported 00:26:26.168 NVM Command Set: Supported 00:26:26.168 Boot Partition: Not Supported 00:26:26.168 Memory Page Size Minimum: 4096 bytes 00:26:26.168 Memory 
Page Size Maximum: 4096 bytes 00:26:26.168 Persistent Memory Region: Not Supported 00:26:26.168 Optional Asynchronous Events Supported 00:26:26.168 Namespace Attribute Notices: Supported 00:26:26.168 Firmware Activation Notices: Not Supported 00:26:26.168 ANA Change Notices: Not Supported 00:26:26.168 PLE Aggregate Log Change Notices: Not Supported 00:26:26.168 LBA Status Info Alert Notices: Not Supported 00:26:26.168 EGE Aggregate Log Change Notices: Not Supported 00:26:26.168 Normal NVM Subsystem Shutdown event: Not Supported 00:26:26.168 Zone Descriptor Change Notices: Not Supported 00:26:26.168 Discovery Log Change Notices: Not Supported 00:26:26.168 Controller Attributes 00:26:26.168 128-bit Host Identifier: Supported 00:26:26.168 Non-Operational Permissive Mode: Not Supported 00:26:26.168 NVM Sets: Not Supported 00:26:26.168 Read Recovery Levels: Not Supported 00:26:26.168 Endurance Groups: Not Supported 00:26:26.168 Predictable Latency Mode: Not Supported 00:26:26.168 Traffic Based Keep ALive: Not Supported 00:26:26.168 Namespace Granularity: Not Supported 00:26:26.168 SQ Associations: Not Supported 00:26:26.168 UUID List: Not Supported 00:26:26.168 Multi-Domain Subsystem: Not Supported 00:26:26.168 Fixed Capacity Management: Not Supported 00:26:26.168 Variable Capacity Management: Not Supported 00:26:26.168 Delete Endurance Group: Not Supported 00:26:26.168 Delete NVM Set: Not Supported 00:26:26.168 Extended LBA Formats Supported: Not Supported 00:26:26.168 Flexible Data Placement Supported: Not Supported 00:26:26.168 00:26:26.168 Controller Memory Buffer Support 00:26:26.168 ================================ 00:26:26.168 Supported: No 00:26:26.168 00:26:26.168 Persistent Memory Region Support 00:26:26.168 ================================ 00:26:26.168 Supported: No 00:26:26.168 00:26:26.168 Admin Command Set Attributes 00:26:26.168 ============================ 00:26:26.168 Security Send/Receive: Not Supported 00:26:26.168 Format NVM: Not Supported 00:26:26.168 Firmware Activate/Download: Not Supported 00:26:26.168 Namespace Management: Not Supported 00:26:26.168 Device Self-Test: Not Supported 00:26:26.168 Directives: Not Supported 00:26:26.168 NVMe-MI: Not Supported 00:26:26.168 Virtualization Management: Not Supported 00:26:26.168 Doorbell Buffer Config: Not Supported 00:26:26.168 Get LBA Status Capability: Not Supported 00:26:26.168 Command & Feature Lockdown Capability: Not Supported 00:26:26.168 Abort Command Limit: 4 00:26:26.168 Async Event Request Limit: 4 00:26:26.168 Number of Firmware Slots: N/A 00:26:26.168 Firmware Slot 1 Read-Only: N/A 00:26:26.168 Firmware Activation Without Reset: [2024-07-15 10:07:39.709262] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:26.168 [2024-07-15 10:07:39.709264] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:26.168 [2024-07-15 10:07:39.709267] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fae2c0) on tqpair=0x1f6aa60 00:26:26.168 N/A 00:26:26.168 Multiple Update Detection Support: N/A 00:26:26.168 Firmware Update Granularity: No Information Provided 00:26:26.168 Per-Namespace SMART Log: No 00:26:26.168 Asymmetric Namespace Access Log Page: Not Supported 00:26:26.168 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:26:26.168 Command Effects Log Page: Supported 00:26:26.168 Get Log Page Extended Data: Supported 00:26:26.168 Telemetry Log Pages: Not Supported 00:26:26.168 Persistent Event Log Pages: Not Supported 00:26:26.168 Supported Log Pages Log Page: May Support 
00:26:26.168 Commands Supported & Effects Log Page: Not Supported 00:26:26.168 Feature Identifiers & Effects Log Page:May Support 00:26:26.168 NVMe-MI Commands & Effects Log Page: May Support 00:26:26.168 Data Area 4 for Telemetry Log: Not Supported 00:26:26.168 Error Log Page Entries Supported: 128 00:26:26.168 Keep Alive: Supported 00:26:26.168 Keep Alive Granularity: 10000 ms 00:26:26.168 00:26:26.168 NVM Command Set Attributes 00:26:26.168 ========================== 00:26:26.168 Submission Queue Entry Size 00:26:26.168 Max: 64 00:26:26.168 Min: 64 00:26:26.168 Completion Queue Entry Size 00:26:26.168 Max: 16 00:26:26.168 Min: 16 00:26:26.168 Number of Namespaces: 32 00:26:26.168 Compare Command: Supported 00:26:26.168 Write Uncorrectable Command: Not Supported 00:26:26.168 Dataset Management Command: Supported 00:26:26.168 Write Zeroes Command: Supported 00:26:26.168 Set Features Save Field: Not Supported 00:26:26.168 Reservations: Supported 00:26:26.168 Timestamp: Not Supported 00:26:26.168 Copy: Supported 00:26:26.168 Volatile Write Cache: Present 00:26:26.168 Atomic Write Unit (Normal): 1 00:26:26.168 Atomic Write Unit (PFail): 1 00:26:26.168 Atomic Compare & Write Unit: 1 00:26:26.168 Fused Compare & Write: Supported 00:26:26.168 Scatter-Gather List 00:26:26.168 SGL Command Set: Supported 00:26:26.168 SGL Keyed: Supported 00:26:26.168 SGL Bit Bucket Descriptor: Not Supported 00:26:26.168 SGL Metadata Pointer: Not Supported 00:26:26.168 Oversized SGL: Not Supported 00:26:26.168 SGL Metadata Address: Not Supported 00:26:26.168 SGL Offset: Supported 00:26:26.168 Transport SGL Data Block: Not Supported 00:26:26.168 Replay Protected Memory Block: Not Supported 00:26:26.168 00:26:26.168 Firmware Slot Information 00:26:26.168 ========================= 00:26:26.168 Active slot: 1 00:26:26.168 Slot 1 Firmware Revision: 24.09 00:26:26.168 00:26:26.168 00:26:26.168 Commands Supported and Effects 00:26:26.168 ============================== 00:26:26.168 Admin Commands 00:26:26.168 -------------- 00:26:26.168 Get Log Page (02h): Supported 00:26:26.168 Identify (06h): Supported 00:26:26.168 Abort (08h): Supported 00:26:26.168 Set Features (09h): Supported 00:26:26.168 Get Features (0Ah): Supported 00:26:26.168 Asynchronous Event Request (0Ch): Supported 00:26:26.168 Keep Alive (18h): Supported 00:26:26.168 I/O Commands 00:26:26.168 ------------ 00:26:26.168 Flush (00h): Supported LBA-Change 00:26:26.168 Write (01h): Supported LBA-Change 00:26:26.168 Read (02h): Supported 00:26:26.168 Compare (05h): Supported 00:26:26.168 Write Zeroes (08h): Supported LBA-Change 00:26:26.168 Dataset Management (09h): Supported LBA-Change 00:26:26.168 Copy (19h): Supported LBA-Change 00:26:26.168 00:26:26.168 Error Log 00:26:26.168 ========= 00:26:26.168 00:26:26.168 Arbitration 00:26:26.168 =========== 00:26:26.168 Arbitration Burst: 1 00:26:26.168 00:26:26.168 Power Management 00:26:26.168 ================ 00:26:26.168 Number of Power States: 1 00:26:26.168 Current Power State: Power State #0 00:26:26.168 Power State #0: 00:26:26.168 Max Power: 0.00 W 00:26:26.168 Non-Operational State: Operational 00:26:26.168 Entry Latency: Not Reported 00:26:26.168 Exit Latency: Not Reported 00:26:26.168 Relative Read Throughput: 0 00:26:26.168 Relative Read Latency: 0 00:26:26.168 Relative Write Throughput: 0 00:26:26.168 Relative Write Latency: 0 00:26:26.168 Idle Power: Not Reported 00:26:26.168 Active Power: Not Reported 00:26:26.168 Non-Operational Permissive Mode: Not Supported 00:26:26.168 00:26:26.168 Health 
Information 00:26:26.168 ================== 00:26:26.168 Critical Warnings: 00:26:26.168 Available Spare Space: OK 00:26:26.168 Temperature: OK 00:26:26.168 Device Reliability: OK 00:26:26.168 Read Only: No 00:26:26.168 Volatile Memory Backup: OK 00:26:26.168 Current Temperature: 0 Kelvin (-273 Celsius) 00:26:26.168 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:26:26.169 Available Spare: 0% 00:26:26.169 Available Spare Threshold: 0% 00:26:26.169 Life Percentage Used:[2024-07-15 10:07:39.709353] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:26.169 [2024-07-15 10:07:39.709356] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1f6aa60) 00:26:26.169 [2024-07-15 10:07:39.709361] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.169 [2024-07-15 10:07:39.709375] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fae2c0, cid 7, qid 0 00:26:26.169 [2024-07-15 10:07:39.709428] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:26.169 [2024-07-15 10:07:39.709432] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:26.169 [2024-07-15 10:07:39.709435] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:26.169 [2024-07-15 10:07:39.709437] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fae2c0) on tqpair=0x1f6aa60 00:26:26.169 [2024-07-15 10:07:39.709465] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:26:26.169 [2024-07-15 10:07:39.709472] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fad840) on tqpair=0x1f6aa60 00:26:26.169 [2024-07-15 10:07:39.709477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.169 [2024-07-15 10:07:39.709480] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fad9c0) on tqpair=0x1f6aa60 00:26:26.169 [2024-07-15 10:07:39.709483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.169 [2024-07-15 10:07:39.709486] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fadb40) on tqpair=0x1f6aa60 00:26:26.169 [2024-07-15 10:07:39.709489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.169 [2024-07-15 10:07:39.709492] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fadcc0) on tqpair=0x1f6aa60 00:26:26.169 [2024-07-15 10:07:39.709495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.169 [2024-07-15 10:07:39.709500] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:26.169 [2024-07-15 10:07:39.709503] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:26.169 [2024-07-15 10:07:39.709505] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f6aa60) 00:26:26.169 [2024-07-15 10:07:39.709510] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.169 [2024-07-15 10:07:39.709522] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fadcc0, cid 3, qid 0 00:26:26.169 [2024-07-15 
10:07:39.709568] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:26.169 [2024-07-15 10:07:39.709573] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:26.169 [2024-07-15 10:07:39.709575] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:26.169 [2024-07-15 10:07:39.709577] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fadcc0) on tqpair=0x1f6aa60 00:26:26.169 [2024-07-15 10:07:39.709582] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:26.169 [2024-07-15 10:07:39.709584] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:26.169 [2024-07-15 10:07:39.709586] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f6aa60) 00:26:26.169 [2024-07-15 10:07:39.709591] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.169 [2024-07-15 10:07:39.709603] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fadcc0, cid 3, qid 0 00:26:26.169 [2024-07-15 10:07:39.709674] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:26.169 [2024-07-15 10:07:39.709679] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:26.169 [2024-07-15 10:07:39.709682] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:26.169 [2024-07-15 10:07:39.709684] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fadcc0) on tqpair=0x1f6aa60 00:26:26.169 [2024-07-15 10:07:39.709687] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:26:26.169 [2024-07-15 10:07:39.709690] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:26:26.169 [2024-07-15 10:07:39.709697] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:26.169 [2024-07-15 10:07:39.709699] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:26.169 [2024-07-15 10:07:39.709702] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f6aa60) 00:26:26.169 [2024-07-15 10:07:39.709706] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.169 [2024-07-15 10:07:39.709718] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fadcc0, cid 3, qid 0 00:26:26.169 [2024-07-15 10:07:39.709758] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:26.169 [2024-07-15 10:07:39.709762] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:26.169 [2024-07-15 10:07:39.709764] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:26.169 [2024-07-15 10:07:39.709767] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fadcc0) on tqpair=0x1f6aa60 00:26:26.169 [2024-07-15 10:07:39.709774] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:26.169 [2024-07-15 10:07:39.709777] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:26.169 [2024-07-15 10:07:39.709779] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f6aa60) 00:26:26.169 [2024-07-15 10:07:39.709783] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.169 [2024-07-15 10:07:39.709794] nvme_tcp.c: 
941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fadcc0, cid 3, qid 0 00:26:26.169 [2024-07-15 10:07:39.709836] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:26.169 [2024-07-15 10:07:39.709840] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:26.169 [2024-07-15 10:07:39.709842] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:26.169 [2024-07-15 10:07:39.709844] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fadcc0) on tqpair=0x1f6aa60 00:26:26.169 [2024-07-15 10:07:39.709851] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:26.169 [2024-07-15 10:07:39.709854] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:26.169 [2024-07-15 10:07:39.709856] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f6aa60) 00:26:26.169 [2024-07-15 10:07:39.709860] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.169 [2024-07-15 10:07:39.709871] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fadcc0, cid 3, qid 0 00:26:26.169 [2024-07-15 10:07:39.709912] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:26.169 [2024-07-15 10:07:39.709917] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:26.169 [2024-07-15 10:07:39.709919] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:26.169 [2024-07-15 10:07:39.709921] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fadcc0) on tqpair=0x1f6aa60 00:26:26.169 [2024-07-15 10:07:39.709928] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:26.169 [2024-07-15 10:07:39.709931] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:26.169 [2024-07-15 10:07:39.709933] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f6aa60) 00:26:26.169 [2024-07-15 10:07:39.709937] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.169 [2024-07-15 10:07:39.709948] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fadcc0, cid 3, qid 0 00:26:26.169 [2024-07-15 10:07:39.709985] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:26.169 [2024-07-15 10:07:39.709990] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:26.169 [2024-07-15 10:07:39.709992] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:26.169 [2024-07-15 10:07:39.709994] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fadcc0) on tqpair=0x1f6aa60 00:26:26.169 [2024-07-15 10:07:39.710000] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:26.169 [2024-07-15 10:07:39.710003] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:26.169 [2024-07-15 10:07:39.710005] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f6aa60) 00:26:26.169 [2024-07-15 10:07:39.710010] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.169 [2024-07-15 10:07:39.710020] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fadcc0, cid 3, qid 0 00:26:26.169 [2024-07-15 10:07:39.710058] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:26.169 [2024-07-15 
10:07:39.710062] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:26.169 [2024-07-15 10:07:39.710064] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:26.169 [2024-07-15 10:07:39.710067] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fadcc0) on tqpair=0x1f6aa60 00:26:26.169 [2024-07-15 10:07:39.710073] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:26.169 [2024-07-15 10:07:39.710076] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:26.169 [2024-07-15 10:07:39.710078] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f6aa60) 00:26:26.169 [2024-07-15 10:07:39.710082] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.169 [2024-07-15 10:07:39.710093] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fadcc0, cid 3, qid 0 00:26:26.169 [2024-07-15 10:07:39.710131] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:26.169 [2024-07-15 10:07:39.710136] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:26.169 [2024-07-15 10:07:39.710138] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:26.169 [2024-07-15 10:07:39.710140] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fadcc0) on tqpair=0x1f6aa60 00:26:26.169 [2024-07-15 10:07:39.710147] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:26.169 [2024-07-15 10:07:39.710149] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:26.169 [2024-07-15 10:07:39.710151] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f6aa60) 00:26:26.169 [2024-07-15 10:07:39.710156] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.169 [2024-07-15 10:07:39.710166] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fadcc0, cid 3, qid 0 00:26:26.169 [2024-07-15 10:07:39.710212] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:26.169 [2024-07-15 10:07:39.710217] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:26.169 [2024-07-15 10:07:39.710219] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:26.169 [2024-07-15 10:07:39.710221] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fadcc0) on tqpair=0x1f6aa60 00:26:26.169 [2024-07-15 10:07:39.710227] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:26.169 [2024-07-15 10:07:39.710230] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:26.169 [2024-07-15 10:07:39.710232] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f6aa60) 00:26:26.169 [2024-07-15 10:07:39.710237] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.169 [2024-07-15 10:07:39.710248] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fadcc0, cid 3, qid 0 00:26:26.169 [2024-07-15 10:07:39.710289] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:26.169 [2024-07-15 10:07:39.710294] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:26.169 [2024-07-15 10:07:39.710296] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:26.169 
[2024-07-15 10:07:39.710298] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fadcc0) on tqpair=0x1f6aa60 00:26:26.169 [2024-07-15 10:07:39.710305] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:26.169 [2024-07-15 10:07:39.710308] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:26.170 [2024-07-15 10:07:39.710310] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f6aa60) 00:26:26.170 [2024-07-15 10:07:39.710314] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.170 [2024-07-15 10:07:39.710325] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fadcc0, cid 3, qid 0 00:26:26.170 [2024-07-15 10:07:39.710370] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:26.170 [2024-07-15 10:07:39.710375] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:26.170 [2024-07-15 10:07:39.710377] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:26.170 [2024-07-15 10:07:39.710379] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fadcc0) on tqpair=0x1f6aa60 00:26:26.170 [2024-07-15 10:07:39.710386] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:26.170 [2024-07-15 10:07:39.710388] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:26.170 [2024-07-15 10:07:39.710391] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f6aa60) 00:26:26.170 [2024-07-15 10:07:39.710395] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.170 [2024-07-15 10:07:39.710406] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fadcc0, cid 3, qid 0 00:26:26.170 [2024-07-15 10:07:39.710446] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:26.170 [2024-07-15 10:07:39.710450] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:26.170 [2024-07-15 10:07:39.710452] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:26.170 [2024-07-15 10:07:39.710455] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fadcc0) on tqpair=0x1f6aa60 00:26:26.170 [2024-07-15 10:07:39.710461] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:26.170 [2024-07-15 10:07:39.710464] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:26.170 [2024-07-15 10:07:39.710466] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f6aa60) 00:26:26.170 [2024-07-15 10:07:39.710471] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.170 [2024-07-15 10:07:39.710481] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fadcc0, cid 3, qid 0 00:26:26.170 [2024-07-15 10:07:39.710523] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:26.170 [2024-07-15 10:07:39.710527] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:26.170 [2024-07-15 10:07:39.710529] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:26.170 [2024-07-15 10:07:39.710532] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fadcc0) on tqpair=0x1f6aa60 00:26:26.170 [2024-07-15 10:07:39.710538] nvme_tcp.c: 
790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:26.170 [2024-07-15 10:07:39.710541] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:26.170 [2024-07-15 10:07:39.710543] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f6aa60) 00:26:26.170 [2024-07-15 10:07:39.710547] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.170 [2024-07-15 10:07:39.710558] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fadcc0, cid 3, qid 0 00:26:26.170 [2024-07-15 10:07:39.710599] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:26.170 [2024-07-15 10:07:39.710604] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:26.170 [2024-07-15 10:07:39.710606] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:26.170 [2024-07-15 10:07:39.710608] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fadcc0) on tqpair=0x1f6aa60 00:26:26.170 [2024-07-15 10:07:39.710615] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:26.170 [2024-07-15 10:07:39.710617] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:26.170 [2024-07-15 10:07:39.710619] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f6aa60) 00:26:26.170 [2024-07-15 10:07:39.710624] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.170 [2024-07-15 10:07:39.710634] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fadcc0, cid 3, qid 0 00:26:26.170 [2024-07-15 10:07:39.714675] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:26.170 [2024-07-15 10:07:39.714688] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:26.170 [2024-07-15 10:07:39.714690] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:26.170 [2024-07-15 10:07:39.714693] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fadcc0) on tqpair=0x1f6aa60 00:26:26.170 [2024-07-15 10:07:39.714700] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:26.170 [2024-07-15 10:07:39.714703] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:26.170 [2024-07-15 10:07:39.714706] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f6aa60) 00:26:26.170 [2024-07-15 10:07:39.714712] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.170 [2024-07-15 10:07:39.714731] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fadcc0, cid 3, qid 0 00:26:26.170 [2024-07-15 10:07:39.714781] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:26.170 [2024-07-15 10:07:39.714785] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:26.170 [2024-07-15 10:07:39.714787] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:26.170 [2024-07-15 10:07:39.714790] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fadcc0) on tqpair=0x1f6aa60 00:26:26.170 [2024-07-15 10:07:39.714795] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 5 milliseconds 00:26:26.170 0% 00:26:26.170 Data Units Read: 0 00:26:26.170 Data Units Written: 0 00:26:26.170 
Host Read Commands: 0 00:26:26.170 Host Write Commands: 0 00:26:26.170 Controller Busy Time: 0 minutes 00:26:26.170 Power Cycles: 0 00:26:26.170 Power On Hours: 0 hours 00:26:26.170 Unsafe Shutdowns: 0 00:26:26.170 Unrecoverable Media Errors: 0 00:26:26.170 Lifetime Error Log Entries: 0 00:26:26.170 Warning Temperature Time: 0 minutes 00:26:26.170 Critical Temperature Time: 0 minutes 00:26:26.170 00:26:26.170 Number of Queues 00:26:26.170 ================ 00:26:26.170 Number of I/O Submission Queues: 127 00:26:26.170 Number of I/O Completion Queues: 127 00:26:26.170 00:26:26.170 Active Namespaces 00:26:26.170 ================= 00:26:26.170 Namespace ID:1 00:26:26.170 Error Recovery Timeout: Unlimited 00:26:26.170 Command Set Identifier: NVM (00h) 00:26:26.170 Deallocate: Supported 00:26:26.170 Deallocated/Unwritten Error: Not Supported 00:26:26.170 Deallocated Read Value: Unknown 00:26:26.170 Deallocate in Write Zeroes: Not Supported 00:26:26.170 Deallocated Guard Field: 0xFFFF 00:26:26.170 Flush: Supported 00:26:26.170 Reservation: Supported 00:26:26.170 Namespace Sharing Capabilities: Multiple Controllers 00:26:26.170 Size (in LBAs): 131072 (0GiB) 00:26:26.170 Capacity (in LBAs): 131072 (0GiB) 00:26:26.170 Utilization (in LBAs): 131072 (0GiB) 00:26:26.170 NGUID: ABCDEF0123456789ABCDEF0123456789 00:26:26.170 EUI64: ABCDEF0123456789 00:26:26.170 UUID: 2d9de51d-f869-4427-99b3-75e2fc81ec4b 00:26:26.170 Thin Provisioning: Not Supported 00:26:26.170 Per-NS Atomic Units: Yes 00:26:26.170 Atomic Boundary Size (Normal): 0 00:26:26.170 Atomic Boundary Size (PFail): 0 00:26:26.170 Atomic Boundary Offset: 0 00:26:26.170 Maximum Single Source Range Length: 65535 00:26:26.170 Maximum Copy Length: 65535 00:26:26.170 Maximum Source Range Count: 1 00:26:26.170 NGUID/EUI64 Never Reused: No 00:26:26.170 Namespace Write Protected: No 00:26:26.170 Number of LBA Formats: 1 00:26:26.170 Current LBA Format: LBA Format #00 00:26:26.170 LBA Format #00: Data Size: 512 Metadata Size: 0 00:26:26.170 00:26:26.170 10:07:39 nvmf_tcp.nvmf_identify -- host/identify.sh@51 -- # sync 00:26:26.431 10:07:39 nvmf_tcp.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:26.431 10:07:39 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:26.431 10:07:39 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:26:26.431 10:07:39 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:26.431 10:07:39 nvmf_tcp.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:26:26.431 10:07:39 nvmf_tcp.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:26:26.431 10:07:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:26.431 10:07:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@117 -- # sync 00:26:26.431 10:07:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:26.431 10:07:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@120 -- # set +e 00:26:26.431 10:07:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:26.431 10:07:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:26.431 rmmod nvme_tcp 00:26:26.431 rmmod nvme_fabrics 00:26:26.431 rmmod nvme_keyring 00:26:26.431 10:07:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:26.431 10:07:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@124 -- # set -e 00:26:26.431 10:07:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@125 -- # return 0 
00:26:26.431 10:07:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 86684 ']' 00:26:26.431 10:07:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@490 -- # killprocess 86684 00:26:26.431 10:07:39 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@948 -- # '[' -z 86684 ']' 00:26:26.431 10:07:39 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@952 -- # kill -0 86684 00:26:26.431 10:07:39 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@953 -- # uname 00:26:26.431 10:07:39 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:26.431 10:07:39 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 86684 00:26:26.431 10:07:39 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:26:26.431 10:07:39 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:26:26.431 10:07:39 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@966 -- # echo 'killing process with pid 86684' 00:26:26.431 killing process with pid 86684 00:26:26.431 10:07:39 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@967 -- # kill 86684 00:26:26.431 10:07:39 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@972 -- # wait 86684 00:26:26.691 10:07:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:26.691 10:07:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:26.691 10:07:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:26.691 10:07:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:26.691 10:07:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:26.691 10:07:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:26.691 10:07:40 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:26.691 10:07:40 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:26.691 10:07:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:26:26.691 00:26:26.691 real 0m2.584s 00:26:26.691 user 0m6.883s 00:26:26.691 sys 0m0.706s 00:26:26.691 10:07:40 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:26.691 10:07:40 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:26:26.691 ************************************ 00:26:26.691 END TEST nvmf_identify 00:26:26.691 ************************************ 00:26:26.691 10:07:40 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:26:26.691 10:07:40 nvmf_tcp -- nvmf/nvmf.sh@98 -- # run_test nvmf_perf /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:26:26.691 10:07:40 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:26:26.691 10:07:40 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:26.691 10:07:40 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:26.691 ************************************ 00:26:26.691 START TEST nvmf_perf 00:26:26.691 ************************************ 00:26:26.691 10:07:40 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:26:26.951 * Looking for test storage... 
00:26:26.951 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:26:26.951 10:07:40 nvmf_tcp.nvmf_perf -- host/perf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:26:26.951 10:07:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:26:26.951 10:07:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:26.951 10:07:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:26.951 10:07:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:26.951 10:07:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:26.951 10:07:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:26.951 10:07:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:26.951 10:07:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:26.951 10:07:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:26.951 10:07:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:26.951 10:07:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:26.951 10:07:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec 00:26:26.951 10:07:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=a2b6b25a-cc90-4aea-9f09-c06f8a634aec 00:26:26.951 10:07:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:26.951 10:07:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:26.951 10:07:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:26:26.951 10:07:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:26.951 10:07:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:26.951 10:07:40 nvmf_tcp.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:26.951 10:07:40 nvmf_tcp.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:26.951 10:07:40 nvmf_tcp.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:26.951 10:07:40 nvmf_tcp.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:26.951 10:07:40 nvmf_tcp.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:26.952 10:07:40 nvmf_tcp.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:26.952 10:07:40 nvmf_tcp.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:26:26.952 10:07:40 nvmf_tcp.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:26.952 10:07:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@47 -- # : 0 00:26:26.952 10:07:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:26.952 10:07:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:26.952 10:07:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:26.952 10:07:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:26.952 10:07:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:26.952 10:07:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:26.952 10:07:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:26.952 10:07:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:26.952 10:07:40 nvmf_tcp.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:26:26.952 10:07:40 nvmf_tcp.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:26:26.952 10:07:40 nvmf_tcp.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:26:26.952 10:07:40 nvmf_tcp.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:26:26.952 10:07:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:26.952 10:07:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:26.952 10:07:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:26.952 10:07:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:26.952 10:07:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:26.952 10:07:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:26.952 10:07:40 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:26.952 10:07:40 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:26.952 10:07:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:26:26.952 10:07:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:26:26.952 10:07:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:26:26.952 10:07:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:26:26.952 10:07:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@431 
-- # [[ tcp == tcp ]] 00:26:26.952 10:07:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@432 -- # nvmf_veth_init 00:26:26.952 10:07:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:26.952 10:07:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:26.952 10:07:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:26:26.952 10:07:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:26:26.952 10:07:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:26:26.952 10:07:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:26:26.952 10:07:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:26:26.952 10:07:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:26.952 10:07:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:26:26.952 10:07:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:26:26.952 10:07:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:26:26.952 10:07:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:26:26.952 10:07:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:26:26.952 10:07:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:26:26.952 Cannot find device "nvmf_tgt_br" 00:26:26.952 10:07:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@155 -- # true 00:26:26.952 10:07:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:26:26.952 Cannot find device "nvmf_tgt_br2" 00:26:26.952 10:07:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@156 -- # true 00:26:26.952 10:07:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:26:26.952 10:07:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:26:26.952 Cannot find device "nvmf_tgt_br" 00:26:26.952 10:07:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@158 -- # true 00:26:26.952 10:07:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:26:26.952 Cannot find device "nvmf_tgt_br2" 00:26:26.952 10:07:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@159 -- # true 00:26:26.952 10:07:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:26:26.952 10:07:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:26:26.952 10:07:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:26:26.952 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:26.952 10:07:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@162 -- # true 00:26:26.952 10:07:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:26:26.952 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:26.952 10:07:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@163 -- # true 00:26:26.952 10:07:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:26:26.952 10:07:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:26:27.213 10:07:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:26:27.213 
10:07:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:26:27.213 10:07:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:26:27.213 10:07:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:26:27.213 10:07:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:26:27.213 10:07:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:26:27.213 10:07:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:26:27.213 10:07:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:26:27.213 10:07:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:26:27.213 10:07:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:26:27.213 10:07:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:26:27.213 10:07:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:26:27.213 10:07:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:26:27.213 10:07:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:26:27.213 10:07:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:26:27.213 10:07:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:26:27.213 10:07:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:26:27.213 10:07:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:26:27.213 10:07:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:26:27.213 10:07:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:26:27.213 10:07:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:26:27.213 10:07:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:26:27.213 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:27.213 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.123 ms 00:26:27.213 00:26:27.213 --- 10.0.0.2 ping statistics --- 00:26:27.213 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:27.213 rtt min/avg/max/mdev = 0.123/0.123/0.123/0.000 ms 00:26:27.213 10:07:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:26:27.213 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:26:27.213 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.073 ms 00:26:27.213 00:26:27.213 --- 10.0.0.3 ping statistics --- 00:26:27.213 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:27.213 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:26:27.213 10:07:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:26:27.213 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:27.213 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:26:27.213 00:26:27.213 --- 10.0.0.1 ping statistics --- 00:26:27.213 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:27.213 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:26:27.213 10:07:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:27.213 10:07:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@433 -- # return 0 00:26:27.213 10:07:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:27.213 10:07:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:27.213 10:07:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:27.213 10:07:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:27.213 10:07:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:27.213 10:07:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:27.213 10:07:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:27.213 10:07:40 nvmf_tcp.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:26:27.213 10:07:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:27.213 10:07:40 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:27.213 10:07:40 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:26:27.213 10:07:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=86912 00:26:27.213 10:07:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:26:27.213 10:07:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 86912 00:26:27.213 10:07:40 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@829 -- # '[' -z 86912 ']' 00:26:27.213 10:07:40 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:27.213 10:07:40 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:27.213 10:07:40 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:27.213 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:27.213 10:07:40 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:27.213 10:07:40 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:26:27.473 [2024-07-15 10:07:40.823717] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:26:27.473 [2024-07-15 10:07:40.823787] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:27.473 [2024-07-15 10:07:40.961539] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:27.733 [2024-07-15 10:07:41.065361] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:27.733 [2024-07-15 10:07:41.065410] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
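The nvmf_veth_init trace above boils down to a small namespace-plus-bridge topology. A minimal sketch of the same setup, using the interface names and addresses reported in the log (run as root; this is an illustrative reconstruction of what nvmf/common.sh does, not the script itself):

    # Target namespace and the two veth pairs (initiator side and target side).
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    # (a second target pair, nvmf_tgt_if2/nvmf_tgt_br2 at 10.0.0.3, is created the same way)
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    # Bring the links up and join both bridge ends to nvmf_br.
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    # Open TCP/4420 on the initiator interface and confirm reachability.
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2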
00:26:27.733 [2024-07-15 10:07:41.065416] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:27.733 [2024-07-15 10:07:41.065421] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:27.733 [2024-07-15 10:07:41.065425] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:27.733 [2024-07-15 10:07:41.065847] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:27.733 [2024-07-15 10:07:41.066001] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:26:27.733 [2024-07-15 10:07:41.066041] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:27.733 [2024-07-15 10:07:41.066044] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:26:28.341 10:07:41 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:28.341 10:07:41 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@862 -- # return 0 00:26:28.341 10:07:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:28.341 10:07:41 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:28.341 10:07:41 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:26:28.341 10:07:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:28.342 10:07:41 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:26:28.342 10:07:41 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config 00:26:28.604 10:07:42 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_get_config bdev 00:26:28.604 10:07:42 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:26:28.864 10:07:42 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:00:10.0 00:26:28.864 10:07:42 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:26:29.124 10:07:42 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:26:29.124 10:07:42 nvmf_tcp.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:00:10.0 ']' 00:26:29.124 10:07:42 nvmf_tcp.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:26:29.124 10:07:42 nvmf_tcp.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:26:29.124 10:07:42 nvmf_tcp.nvmf_perf -- host/perf.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:26:29.384 [2024-07-15 10:07:42.714242] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:29.384 10:07:42 nvmf_tcp.nvmf_perf -- host/perf.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:29.384 10:07:42 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:26:29.384 10:07:42 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:29.643 10:07:43 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:26:29.643 10:07:43 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:26:29.902 10:07:43 nvmf_tcp.nvmf_perf -- host/perf.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:30.161 [2024-07-15 10:07:43.517793] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:30.161 10:07:43 nvmf_tcp.nvmf_perf -- host/perf.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:26:30.419 10:07:43 nvmf_tcp.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:26:30.419 10:07:43 nvmf_tcp.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:26:30.419 10:07:43 nvmf_tcp.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:26:30.419 10:07:43 nvmf_tcp.nvmf_perf -- host/perf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:26:31.354 Initializing NVMe Controllers 00:26:31.354 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:26:31.354 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:26:31.354 Initialization complete. Launching workers. 00:26:31.354 ======================================================== 00:26:31.354 Latency(us) 00:26:31.354 Device Information : IOPS MiB/s Average min max 00:26:31.354 PCIE (0000:00:10.0) NSID 1 from core 0: 19968.00 78.00 1603.13 569.75 8374.92 00:26:31.354 ======================================================== 00:26:31.354 Total : 19968.00 78.00 1603.13 569.75 8374.92 00:26:31.354 00:26:31.354 10:07:44 nvmf_tcp.nvmf_perf -- host/perf.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:32.731 Initializing NVMe Controllers 00:26:32.731 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:32.731 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:26:32.731 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:26:32.731 Initialization complete. Launching workers. 00:26:32.731 ======================================================== 00:26:32.731 Latency(us) 00:26:32.731 Device Information : IOPS MiB/s Average min max 00:26:32.731 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 5187.93 20.27 192.54 73.61 4195.49 00:26:32.731 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 123.76 0.48 8144.18 7970.29 12073.06 00:26:32.731 ======================================================== 00:26:32.731 Total : 5311.69 20.75 377.81 73.61 12073.06 00:26:32.731 00:26:32.731 10:07:46 nvmf_tcp.nvmf_perf -- host/perf.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:34.126 Initializing NVMe Controllers 00:26:34.126 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:34.126 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:26:34.126 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:26:34.126 Initialization complete. Launching workers. 
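The subsystem that the perf runs above and below are hitting was populated through the rpc.py calls traced in host/perf.sh. Condensed into a hedged sketch (only calls visible in the trace; Nvme0n1 comes from the gen_nvme.sh config loaded earlier):

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # TCP transport, a 64 MB malloc bdev, and the test subsystem.
    $RPC nvmf_create_transport -t tcp -o
    $RPC bdev_malloc_create 64 512                     # reported as "Malloc0"
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    # Expose both bdevs as namespaces of cnode1.
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
    # Listen on the namespaced target address, plus the discovery subsystem.
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420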
00:26:34.126 ======================================================== 00:26:34.126 Latency(us) 00:26:34.126 Device Information : IOPS MiB/s Average min max 00:26:34.126 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 10767.77 42.06 2972.83 554.77 6400.66 00:26:34.126 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 2685.94 10.49 12018.02 7510.19 20180.77 00:26:34.126 ======================================================== 00:26:34.126 Total : 13453.72 52.55 4778.64 554.77 20180.77 00:26:34.126 00:26:34.126 10:07:47 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ '' == \e\8\1\0 ]] 00:26:34.126 10:07:47 nvmf_tcp.nvmf_perf -- host/perf.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:36.674 Initializing NVMe Controllers 00:26:36.674 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:36.674 Controller IO queue size 128, less than required. 00:26:36.674 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:26:36.674 Controller IO queue size 128, less than required. 00:26:36.674 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:26:36.674 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:26:36.674 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:26:36.674 Initialization complete. Launching workers. 00:26:36.674 ======================================================== 00:26:36.674 Latency(us) 00:26:36.674 Device Information : IOPS MiB/s Average min max 00:26:36.674 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2013.90 503.48 64233.55 38424.67 156955.32 00:26:36.674 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 574.19 143.55 233448.07 105181.20 383014.60 00:26:36.674 ======================================================== 00:26:36.674 Total : 2588.09 647.02 101775.06 38424.67 383014.60 00:26:36.674 00:26:36.674 10:07:50 nvmf_tcp.nvmf_perf -- host/perf.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:26:36.933 Initializing NVMe Controllers 00:26:36.933 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:36.933 Controller IO queue size 128, less than required. 00:26:36.933 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:26:36.933 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:26:36.933 Controller IO queue size 128, less than required. 00:26:36.933 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:26:36.933 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 4096. 
Removing this ns from test 00:26:36.933 WARNING: Some requested NVMe devices were skipped 00:26:36.933 No valid NVMe controllers or AIO or URING devices found 00:26:36.933 10:07:50 nvmf_tcp.nvmf_perf -- host/perf.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:26:39.465 Initializing NVMe Controllers 00:26:39.465 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:39.465 Controller IO queue size 128, less than required. 00:26:39.465 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:26:39.465 Controller IO queue size 128, less than required. 00:26:39.465 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:26:39.465 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:26:39.465 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:26:39.465 Initialization complete. Launching workers. 00:26:39.465 00:26:39.465 ==================== 00:26:39.465 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:26:39.465 TCP transport: 00:26:39.465 polls: 18915 00:26:39.465 idle_polls: 11713 00:26:39.465 sock_completions: 7202 00:26:39.465 nvme_completions: 4287 00:26:39.465 submitted_requests: 6426 00:26:39.465 queued_requests: 1 00:26:39.465 00:26:39.465 ==================== 00:26:39.465 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:26:39.465 TCP transport: 00:26:39.465 polls: 17307 00:26:39.465 idle_polls: 12530 00:26:39.465 sock_completions: 4777 00:26:39.465 nvme_completions: 7993 00:26:39.465 submitted_requests: 11966 00:26:39.465 queued_requests: 1 00:26:39.465 ======================================================== 00:26:39.465 Latency(us) 00:26:39.465 Device Information : IOPS MiB/s Average min max 00:26:39.465 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1069.42 267.36 122807.21 62176.66 249820.92 00:26:39.465 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1994.13 498.53 64707.94 36578.52 119103.64 00:26:39.465 ======================================================== 00:26:39.465 Total : 3063.55 765.89 84989.21 36578.52 249820.92 00:26:39.465 00:26:39.465 10:07:52 nvmf_tcp.nvmf_perf -- host/perf.sh@66 -- # sync 00:26:39.465 10:07:52 nvmf_tcp.nvmf_perf -- host/perf.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:39.728 10:07:53 nvmf_tcp.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:26:39.728 10:07:53 nvmf_tcp.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:26:39.728 10:07:53 nvmf_tcp.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:26:39.728 10:07:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:39.728 10:07:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@117 -- # sync 00:26:39.728 10:07:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:39.728 10:07:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@120 -- # set +e 00:26:39.728 10:07:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:39.728 10:07:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:39.728 rmmod nvme_tcp 00:26:39.728 rmmod nvme_fabrics 00:26:39.728 rmmod nvme_keyring 00:26:39.728 10:07:53 
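All of the perf runs in this test drive the same spdk_nvme_perf binary; only queue depth, I/O size, duration, and the -r target string change. A hedged sketch of the two invocation styles seen above:

    PERF=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf
    # Local baseline against the PCIe controller (QD 32, 4 KiB, 50/50 random read/write).
    $PERF -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0'
    # Same tool over NVMe/TCP against the namespaced target, here with the
    # per-transport poll statistics shown in the last run above.
    $PERF -q 128 -o 262144 -w randrw -M 50 -t 2 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat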
nvmf_tcp.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:39.728 10:07:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@124 -- # set -e 00:26:39.728 10:07:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@125 -- # return 0 00:26:39.728 10:07:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 86912 ']' 00:26:39.728 10:07:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 86912 00:26:39.728 10:07:53 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@948 -- # '[' -z 86912 ']' 00:26:39.728 10:07:53 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@952 -- # kill -0 86912 00:26:39.728 10:07:53 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@953 -- # uname 00:26:39.728 10:07:53 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:39.728 10:07:53 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 86912 00:26:39.728 killing process with pid 86912 00:26:39.728 10:07:53 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:26:39.728 10:07:53 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:26:39.728 10:07:53 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@966 -- # echo 'killing process with pid 86912' 00:26:39.728 10:07:53 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@967 -- # kill 86912 00:26:39.728 10:07:53 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@972 -- # wait 86912 00:26:41.109 10:07:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:41.109 10:07:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:41.109 10:07:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:41.109 10:07:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:41.109 10:07:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:41.109 10:07:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:41.109 10:07:54 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:41.109 10:07:54 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:41.109 10:07:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:26:41.109 ************************************ 00:26:41.109 END TEST nvmf_perf 00:26:41.109 ************************************ 00:26:41.109 00:26:41.109 real 0m14.408s 00:26:41.109 user 0m52.658s 00:26:41.109 sys 0m3.440s 00:26:41.109 10:07:54 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:41.109 10:07:54 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:26:41.109 10:07:54 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:26:41.109 10:07:54 nvmf_tcp -- nvmf/nvmf.sh@99 -- # run_test nvmf_fio_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:26:41.109 10:07:54 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:26:41.109 10:07:54 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:41.109 10:07:54 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:41.109 ************************************ 00:26:41.109 START TEST nvmf_fio_host 00:26:41.109 ************************************ 00:26:41.109 10:07:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:26:41.369 * Looking for test storage... 
00:26:41.369 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:26:41.369 10:07:54 nvmf_tcp.nvmf_fio_host -- host/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:41.369 10:07:54 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:41.369 10:07:54 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:41.369 10:07:54 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:41.369 10:07:54 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:41.369 10:07:54 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:41.369 10:07:54 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:41.369 10:07:54 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:26:41.369 10:07:54 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:41.369 10:07:54 nvmf_tcp.nvmf_fio_host -- host/fio.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:26:41.369 10:07:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:26:41.369 10:07:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:41.369 10:07:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:41.369 10:07:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:41.369 10:07:54 nvmf_tcp.nvmf_fio_host -- 
nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:41.369 10:07:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:41.369 10:07:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:41.369 10:07:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:41.369 10:07:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:41.369 10:07:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:41.369 10:07:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:41.369 10:07:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec 00:26:41.369 10:07:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=a2b6b25a-cc90-4aea-9f09-c06f8a634aec 00:26:41.369 10:07:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:41.369 10:07:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:41.369 10:07:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:26:41.369 10:07:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:41.369 10:07:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:41.369 10:07:54 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:41.369 10:07:54 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:41.369 10:07:54 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:41.369 10:07:54 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:41.369 10:07:54 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:41.369 10:07:54 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:41.369 10:07:54 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:26:41.369 10:07:54 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:41.369 10:07:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 00:26:41.369 10:07:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:41.369 10:07:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:41.369 10:07:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:41.369 10:07:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:41.369 10:07:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:41.369 10:07:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:41.369 10:07:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:41.369 10:07:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:41.369 10:07:54 nvmf_tcp.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:26:41.369 10:07:54 nvmf_tcp.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:26:41.369 10:07:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:41.369 10:07:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:41.369 10:07:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:41.369 10:07:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:41.369 10:07:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:41.369 10:07:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:41.369 10:07:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:41.369 10:07:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:41.369 10:07:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:26:41.369 10:07:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:26:41.369 10:07:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:26:41.369 10:07:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 
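nvmf/common.sh has just minted a host identity with nvme gen-hostnqn and stored it in NVME_HOSTNQN/NVME_HOSTID for tests that drive the kernel initiator. This fio test connects through the SPDK plugin instead, but for orientation, a connect using those variables would look roughly like the following (standard nvme-cli flags, not taken from this trace):

    # Illustrative only: kernel-initiator connect with the generated host identity.
    NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec
    NVME_HOSTID=a2b6b25a-cc90-4aea-9f09-c06f8a634aec
    nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
        --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"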
00:26:41.370 10:07:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:26:41.370 10:07:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@432 -- # nvmf_veth_init 00:26:41.370 10:07:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:41.370 10:07:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:41.370 10:07:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:26:41.370 10:07:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:26:41.370 10:07:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:26:41.370 10:07:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:26:41.370 10:07:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:26:41.370 10:07:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:41.370 10:07:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:26:41.370 10:07:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:26:41.370 10:07:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:26:41.370 10:07:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:26:41.370 10:07:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:26:41.370 10:07:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:26:41.370 Cannot find device "nvmf_tgt_br" 00:26:41.370 10:07:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@155 -- # true 00:26:41.370 10:07:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:26:41.370 Cannot find device "nvmf_tgt_br2" 00:26:41.370 10:07:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@156 -- # true 00:26:41.370 10:07:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:26:41.370 10:07:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:26:41.370 Cannot find device "nvmf_tgt_br" 00:26:41.370 10:07:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@158 -- # true 00:26:41.370 10:07:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:26:41.370 Cannot find device "nvmf_tgt_br2" 00:26:41.370 10:07:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@159 -- # true 00:26:41.370 10:07:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:26:41.629 10:07:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:26:41.629 10:07:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:26:41.629 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:41.629 10:07:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@162 -- # true 00:26:41.629 10:07:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:26:41.629 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:41.629 10:07:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@163 -- # true 00:26:41.629 10:07:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:26:41.629 10:07:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@169 -- # ip link 
add nvmf_init_if type veth peer name nvmf_init_br 00:26:41.629 10:07:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:26:41.629 10:07:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:26:41.629 10:07:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:26:41.629 10:07:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:26:41.629 10:07:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:26:41.629 10:07:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:26:41.629 10:07:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:26:41.629 10:07:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:26:41.629 10:07:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:26:41.629 10:07:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:26:41.629 10:07:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:26:41.629 10:07:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:26:41.629 10:07:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:26:41.629 10:07:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:26:41.629 10:07:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:26:41.629 10:07:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:26:41.629 10:07:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:26:41.629 10:07:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:26:41.629 10:07:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:26:41.629 10:07:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:26:41.629 10:07:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:26:41.629 10:07:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:26:41.629 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:41.629 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.072 ms 00:26:41.629 00:26:41.629 --- 10.0.0.2 ping statistics --- 00:26:41.629 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:41.629 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:26:41.629 10:07:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:26:41.629 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:26:41.629 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.046 ms 00:26:41.629 00:26:41.629 --- 10.0.0.3 ping statistics --- 00:26:41.629 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:41.629 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:26:41.629 10:07:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:26:41.629 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:41.629 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:26:41.629 00:26:41.629 --- 10.0.0.1 ping statistics --- 00:26:41.629 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:41.629 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:26:41.629 10:07:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:41.629 10:07:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@433 -- # return 0 00:26:41.629 10:07:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:41.629 10:07:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:41.629 10:07:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:41.629 10:07:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:41.629 10:07:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:41.629 10:07:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:41.629 10:07:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:41.890 10:07:55 nvmf_tcp.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:26:41.890 10:07:55 nvmf_tcp.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:26:41.890 10:07:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:41.890 10:07:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:26:41.890 10:07:55 nvmf_tcp.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=87399 00:26:41.890 10:07:55 nvmf_tcp.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:41.890 10:07:55 nvmf_tcp.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 87399 00:26:41.890 10:07:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@829 -- # '[' -z 87399 ']' 00:26:41.890 10:07:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:41.890 10:07:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:41.890 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:41.890 10:07:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:41.890 10:07:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:41.890 10:07:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:26:41.890 10:07:55 nvmf_tcp.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:26:41.890 [2024-07-15 10:07:55.284387] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:26:41.890 [2024-07-15 10:07:55.284455] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:41.890 [2024-07-15 10:07:55.415987] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:42.149 [2024-07-15 10:07:55.521935] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
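waitforlisten above simply blocks until the freshly launched nvmf_tgt (pid 87399) is alive and its RPC socket at /var/tmp/spdk.sock is usable. A minimal approximation of that wait, assuming the real helper in autotest_common.sh does more thorough checking:

    nvmfpid=87399                 # pid reported in the trace; illustrative only
    rpc_sock=/var/tmp/spdk.sock
    for i in $(seq 1 100); do     # max_retries=100, as in the trace
        kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited early"; exit 1; }
        [ -S "$rpc_sock" ] && break
        sleep 0.1
    done
    # rpc.py calls against $rpc_sock should succeed from here on.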
00:26:42.149 [2024-07-15 10:07:55.522066] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:42.149 [2024-07-15 10:07:55.522106] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:42.149 [2024-07-15 10:07:55.522154] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:42.149 [2024-07-15 10:07:55.522175] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:42.149 [2024-07-15 10:07:55.522316] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:42.149 [2024-07-15 10:07:55.522469] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:26:42.149 [2024-07-15 10:07:55.522533] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:26:42.149 [2024-07-15 10:07:55.522543] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:42.716 10:07:56 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:42.716 10:07:56 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@862 -- # return 0 00:26:42.716 10:07:56 nvmf_tcp.nvmf_fio_host -- host/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:26:42.975 [2024-07-15 10:07:56.303467] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:42.975 10:07:56 nvmf_tcp.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:26:42.975 10:07:56 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:42.975 10:07:56 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:26:42.975 10:07:56 nvmf_tcp.nvmf_fio_host -- host/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:26:43.234 Malloc1 00:26:43.234 10:07:56 nvmf_tcp.nvmf_fio_host -- host/fio.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:43.234 10:07:56 nvmf_tcp.nvmf_fio_host -- host/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:26:43.494 10:07:56 nvmf_tcp.nvmf_fio_host -- host/fio.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:43.752 [2024-07-15 10:07:57.164069] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:43.752 10:07:57 nvmf_tcp.nvmf_fio_host -- host/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:26:44.010 10:07:57 nvmf_tcp.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:26:44.011 10:07:57 nvmf_tcp.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:26:44.011 10:07:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:26:44.011 10:07:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:26:44.011 10:07:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 
-- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:44.011 10:07:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:26:44.011 10:07:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:26:44.011 10:07:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:26:44.011 10:07:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:26:44.011 10:07:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:26:44.011 10:07:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:26:44.011 10:07:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:26:44.011 10:07:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:26:44.011 10:07:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:26:44.011 10:07:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:26:44.011 10:07:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:26:44.011 10:07:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:26:44.011 10:07:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:26:44.011 10:07:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:26:44.011 10:07:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:26:44.011 10:07:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:26:44.011 10:07:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:26:44.011 10:07:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:26:44.011 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:26:44.011 fio-3.35 00:26:44.011 Starting 1 thread 00:26:46.550 00:26:46.550 test: (groupid=0, jobs=1): err= 0: pid=87531: Mon Jul 15 10:07:59 2024 00:26:46.550 read: IOPS=11.3k, BW=44.3MiB/s (46.4MB/s)(88.8MiB/2005msec) 00:26:46.550 slat (nsec): min=1500, max=439237, avg=1846.71, stdev=3796.70 00:26:46.550 clat (usec): min=4427, max=11835, avg=5930.22, stdev=558.17 00:26:46.550 lat (usec): min=4430, max=11842, avg=5932.06, stdev=558.51 00:26:46.550 clat percentiles (usec): 00:26:46.550 | 1.00th=[ 4948], 5.00th=[ 5211], 10.00th=[ 5342], 20.00th=[ 5538], 00:26:46.550 | 30.00th=[ 5669], 40.00th=[ 5735], 50.00th=[ 5866], 60.00th=[ 5932], 00:26:46.550 | 70.00th=[ 6128], 80.00th=[ 6259], 90.00th=[ 6587], 95.00th=[ 6915], 00:26:46.550 | 99.00th=[ 7635], 99.50th=[ 8094], 99.90th=[10028], 99.95th=[10421], 00:26:46.550 | 99.99th=[11731] 00:26:46.550 bw ( KiB/s): min=44176, max=46416, per=99.86%, avg=45277.50, stdev=962.05, samples=4 00:26:46.550 iops : min=11044, max=11604, avg=11319.25, stdev=240.58, samples=4 00:26:46.550 write: IOPS=11.3k, BW=44.0MiB/s (46.1MB/s)(88.2MiB/2005msec); 0 zone resets 00:26:46.550 slat (nsec): min=1547, max=350691, avg=1909.82, stdev=2627.46 00:26:46.550 clat (usec): min=3536, max=9435, avg=5352.81, stdev=461.22 
00:26:46.550 lat (usec): min=3553, max=9437, avg=5354.72, stdev=461.48 00:26:46.550 clat percentiles (usec): 00:26:46.550 | 1.00th=[ 4424], 5.00th=[ 4752], 10.00th=[ 4883], 20.00th=[ 5014], 00:26:46.550 | 30.00th=[ 5080], 40.00th=[ 5211], 50.00th=[ 5276], 60.00th=[ 5407], 00:26:46.550 | 70.00th=[ 5473], 80.00th=[ 5669], 90.00th=[ 5932], 95.00th=[ 6194], 00:26:46.550 | 99.00th=[ 6718], 99.50th=[ 7111], 99.90th=[ 8094], 99.95th=[ 8455], 00:26:46.550 | 99.99th=[ 8979] 00:26:46.550 bw ( KiB/s): min=44512, max=45536, per=99.98%, avg=45037.25, stdev=445.18, samples=4 00:26:46.550 iops : min=11128, max=11384, avg=11259.25, stdev=111.26, samples=4 00:26:46.550 lat (msec) : 4=0.05%, 10=99.90%, 20=0.06% 00:26:46.550 cpu : usr=72.50%, sys=20.96%, ctx=15, majf=0, minf=6 00:26:46.550 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:26:46.550 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:46.550 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:26:46.550 issued rwts: total=22728,22579,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:46.550 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:46.550 00:26:46.550 Run status group 0 (all jobs): 00:26:46.550 READ: bw=44.3MiB/s (46.4MB/s), 44.3MiB/s-44.3MiB/s (46.4MB/s-46.4MB/s), io=88.8MiB (93.1MB), run=2005-2005msec 00:26:46.550 WRITE: bw=44.0MiB/s (46.1MB/s), 44.0MiB/s-44.0MiB/s (46.1MB/s-46.1MB/s), io=88.2MiB (92.5MB), run=2005-2005msec 00:26:46.550 10:07:59 nvmf_tcp.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:26:46.550 10:07:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:26:46.550 10:07:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:26:46.550 10:07:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:46.550 10:07:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:26:46.550 10:07:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:26:46.550 10:07:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:26:46.550 10:07:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:26:46.550 10:07:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:26:46.550 10:07:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:26:46.550 10:07:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:26:46.550 10:07:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:26:46.551 10:07:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:26:46.551 10:07:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:26:46.551 10:07:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:26:46.551 10:07:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 
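Both fio jobs in this test (example_config.fio above, mock_sgl_config.fio being prepared here) follow the same pattern: the SPDK NVMe fio plugin is LD_PRELOADed and the TCP subsystem is addressed through fio's --filename syntax rather than a block device. A hedged sketch of that invocation:

    PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme
    # Preload the SPDK ioengine and point fio at namespace 1 of the TCP subsystem.
    LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme \
        /usr/src/fio/fio "$PLUGIN_DIR/example_config.fio" \
        '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' \
        --bs=4096
    # The job file selects the plugin itself, as the fio banner shows:
    #   ioengine=spdk
    #   iodepth=128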
00:26:46.551 10:07:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:26:46.551 10:07:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:26:46.551 10:07:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:26:46.551 10:07:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:26:46.551 10:07:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:26:46.551 10:07:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:26:46.551 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:26:46.551 fio-3.35 00:26:46.551 Starting 1 thread 00:26:49.087 00:26:49.087 test: (groupid=0, jobs=1): err= 0: pid=87577: Mon Jul 15 10:08:02 2024 00:26:49.087 read: IOPS=10.6k, BW=166MiB/s (174MB/s)(332MiB/2005msec) 00:26:49.087 slat (nsec): min=2437, max=91799, avg=2890.08, stdev=1632.86 00:26:49.087 clat (usec): min=1898, max=17410, avg=7165.54, stdev=1836.17 00:26:49.087 lat (usec): min=1901, max=17428, avg=7168.43, stdev=1836.42 00:26:49.087 clat percentiles (usec): 00:26:49.087 | 1.00th=[ 3621], 5.00th=[ 4424], 10.00th=[ 4883], 20.00th=[ 5538], 00:26:49.087 | 30.00th=[ 6063], 40.00th=[ 6587], 50.00th=[ 7177], 60.00th=[ 7701], 00:26:49.087 | 70.00th=[ 8160], 80.00th=[ 8455], 90.00th=[ 9241], 95.00th=[10290], 00:26:49.087 | 99.00th=[12387], 99.50th=[13042], 99.90th=[15664], 99.95th=[16909], 00:26:49.087 | 99.99th=[17433] 00:26:49.087 bw ( KiB/s): min=76960, max=95552, per=49.39%, avg=83776.00, stdev=8576.36, samples=4 00:26:49.087 iops : min= 4810, max= 5972, avg=5236.00, stdev=536.02, samples=4 00:26:49.087 write: IOPS=6187, BW=96.7MiB/s (101MB/s)(172MiB/1774msec); 0 zone resets 00:26:49.087 slat (usec): min=28, max=549, avg=32.04, stdev=10.42 00:26:49.087 clat (usec): min=2496, max=18633, avg=8715.20, stdev=1696.76 00:26:49.087 lat (usec): min=2526, max=18779, avg=8747.24, stdev=1700.03 00:26:49.087 clat percentiles (usec): 00:26:49.087 | 1.00th=[ 5866], 5.00th=[ 6521], 10.00th=[ 6849], 20.00th=[ 7308], 00:26:49.087 | 30.00th=[ 7701], 40.00th=[ 8029], 50.00th=[ 8455], 60.00th=[ 8848], 00:26:49.087 | 70.00th=[ 9372], 80.00th=[10028], 90.00th=[10945], 95.00th=[11731], 00:26:49.087 | 99.00th=[13829], 99.50th=[14353], 99.90th=[17957], 99.95th=[18220], 00:26:49.087 | 99.99th=[18482] 00:26:49.087 bw ( KiB/s): min=79936, max=99328, per=88.30%, avg=87408.00, stdev=9029.28, samples=4 00:26:49.087 iops : min= 4996, max= 6208, avg=5463.00, stdev=564.33, samples=4 00:26:49.087 lat (msec) : 2=0.01%, 4=1.51%, 10=87.74%, 20=10.75% 00:26:49.087 cpu : usr=76.55%, sys=15.62%, ctx=55, majf=0, minf=22 00:26:49.087 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.6% 00:26:49.087 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:49.087 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:26:49.087 issued rwts: total=21257,10976,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:49.087 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:49.087 00:26:49.087 Run status group 0 (all jobs): 00:26:49.087 READ: bw=166MiB/s (174MB/s), 166MiB/s-166MiB/s (174MB/s-174MB/s), io=332MiB (348MB), run=2005-2005msec 00:26:49.087 WRITE: bw=96.7MiB/s 
(101MB/s), 96.7MiB/s-96.7MiB/s (101MB/s-101MB/s), io=172MiB (180MB), run=1774-1774msec 00:26:49.087 10:08:02 nvmf_tcp.nvmf_fio_host -- host/fio.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:49.087 10:08:02 nvmf_tcp.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:26:49.087 10:08:02 nvmf_tcp.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:26:49.087 10:08:02 nvmf_tcp.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:26:49.087 10:08:02 nvmf_tcp.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:26:49.087 10:08:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:49.087 10:08:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@117 -- # sync 00:26:49.087 10:08:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:49.087 10:08:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e 00:26:49.087 10:08:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:49.087 10:08:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:49.087 rmmod nvme_tcp 00:26:49.087 rmmod nvme_fabrics 00:26:49.087 rmmod nvme_keyring 00:26:49.347 10:08:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:49.347 10:08:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e 00:26:49.347 10:08:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0 00:26:49.347 10:08:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 87399 ']' 00:26:49.347 10:08:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@490 -- # killprocess 87399 00:26:49.347 10:08:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@948 -- # '[' -z 87399 ']' 00:26:49.347 10:08:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@952 -- # kill -0 87399 00:26:49.348 10:08:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@953 -- # uname 00:26:49.348 10:08:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:49.348 10:08:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 87399 00:26:49.348 killing process with pid 87399 00:26:49.348 10:08:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:26:49.348 10:08:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:26:49.348 10:08:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 87399' 00:26:49.348 10:08:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@967 -- # kill 87399 00:26:49.348 10:08:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@972 -- # wait 87399 00:26:49.607 10:08:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:49.607 10:08:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:49.607 10:08:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:49.607 10:08:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:49.607 10:08:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:49.607 10:08:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:49.607 10:08:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:49.607 10:08:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:49.607 10:08:02 
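The teardown running here is the same pattern every host test in this log ends with: unload the kernel initiator modules, stop the target, and remove the namespaced test network. Condensed into a hedged sketch (assuming root; _remove_spdk_ns is an autotest helper that presumably deletes nvmf_tgt_ns_spdk):

    # Drop the initiator modules pulled in by the earlier modprobe nvme-tcp.
    modprobe -v -r nvme-tcp
    modprobe -v -r nvme-fabrics
    # Stop the target; the scripts also `wait` on the pid before continuing.
    kill "$nvmfpid"
    # Remove the test namespace and flush the initiator address.
    ip netns del nvmf_tgt_ns_spdk
    ip -4 addr flush nvmf_init_if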
nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:26:49.607 00:26:49.607 real 0m8.343s 00:26:49.607 user 0m33.742s 00:26:49.607 sys 0m2.103s 00:26:49.607 10:08:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:49.607 ************************************ 00:26:49.607 END TEST nvmf_fio_host 00:26:49.607 ************************************ 00:26:49.607 10:08:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:26:49.607 10:08:03 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:26:49.607 10:08:03 nvmf_tcp -- nvmf/nvmf.sh@100 -- # run_test nvmf_failover /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:26:49.607 10:08:03 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:26:49.607 10:08:03 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:49.607 10:08:03 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:49.607 ************************************ 00:26:49.607 START TEST nvmf_failover 00:26:49.607 ************************************ 00:26:49.607 10:08:03 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:26:49.865 * Looking for test storage... 00:26:49.865 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:26:49.865 10:08:03 nvmf_tcp.nvmf_failover -- host/failover.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:26:49.865 10:08:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:26:49.865 10:08:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:49.865 10:08:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:49.865 10:08:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:49.865 10:08:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:49.865 10:08:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:49.865 10:08:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:49.865 10:08:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:49.865 10:08:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:49.865 10:08:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:49.865 10:08:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:49.865 10:08:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec 00:26:49.865 10:08:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=a2b6b25a-cc90-4aea-9f09-c06f8a634aec 00:26:49.865 10:08:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:49.865 10:08:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:49.865 10:08:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:26:49.865 10:08:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:49.865 10:08:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:49.865 10:08:03 nvmf_tcp.nvmf_failover -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:49.865 10:08:03 nvmf_tcp.nvmf_failover -- scripts/common.sh@516 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:49.865 10:08:03 nvmf_tcp.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:49.865 10:08:03 nvmf_tcp.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:49.865 10:08:03 nvmf_tcp.nvmf_failover -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:49.865 10:08:03 nvmf_tcp.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:49.865 10:08:03 nvmf_tcp.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:26:49.865 10:08:03 nvmf_tcp.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:49.865 10:08:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@47 -- # : 0 00:26:49.865 10:08:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:49.865 10:08:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:49.865 10:08:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:49.865 10:08:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:49.865 10:08:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:49.865 10:08:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:49.865 10:08:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:49.865 10:08:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:49.865 10:08:03 nvmf_tcp.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 
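For readers skimming the trace: sourcing nvmf/common.sh and host/failover.sh above pins down the handful of constants the rest of this test leans on. A minimal standalone restatement of that configuration (values copied from the trace; nvme gen-hostnqn emits a fresh UUID-based NQN every run, so the literal value will differ):

NVMF_PORT=4420                      # primary listener; the only port opened in iptables later
NVMF_SECOND_PORT=4421               # listener used for the first failover
NVMF_THIRD_PORT=4422                # listener used for the second failover
NVME_HOSTNQN=$(nvme gen-hostnqn)    # random host NQN, e.g. nqn.2014-08.org.nvmexpress:uuid:...
MALLOC_BDEV_SIZE=64                 # MiB backing the malloc bdev exported by cnode1
MALLOC_BLOCK_SIZE=512
rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
bdevperf_rpc_sock=/var/tmp/bdevperf.sock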
00:26:49.865 10:08:03 nvmf_tcp.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:26:49.865 10:08:03 nvmf_tcp.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:26:49.865 10:08:03 nvmf_tcp.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:26:49.865 10:08:03 nvmf_tcp.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:26:49.865 10:08:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:49.865 10:08:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:49.865 10:08:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:49.865 10:08:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:49.865 10:08:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:49.865 10:08:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:49.865 10:08:03 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:49.865 10:08:03 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:49.865 10:08:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:26:49.865 10:08:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:26:49.865 10:08:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:26:49.865 10:08:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:26:49.865 10:08:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:26:49.865 10:08:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@432 -- # nvmf_veth_init 00:26:49.865 10:08:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:49.865 10:08:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:49.865 10:08:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:26:49.865 10:08:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:26:49.865 10:08:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:26:49.865 10:08:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:26:49.865 10:08:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:26:49.865 10:08:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:49.865 10:08:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:26:49.865 10:08:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:26:49.865 10:08:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:26:49.865 10:08:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:26:49.865 10:08:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:26:49.865 10:08:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:26:49.865 Cannot find device "nvmf_tgt_br" 00:26:49.865 10:08:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@155 -- # true 00:26:49.865 10:08:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:26:49.865 Cannot find device "nvmf_tgt_br2" 00:26:49.865 10:08:03 
nvmf_tcp.nvmf_failover -- nvmf/common.sh@156 -- # true 00:26:49.865 10:08:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:26:49.865 10:08:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:26:49.865 Cannot find device "nvmf_tgt_br" 00:26:49.865 10:08:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@158 -- # true 00:26:49.865 10:08:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:26:49.865 Cannot find device "nvmf_tgt_br2" 00:26:49.865 10:08:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@159 -- # true 00:26:49.866 10:08:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:26:49.866 10:08:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:26:49.866 10:08:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:26:49.866 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:49.866 10:08:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@162 -- # true 00:26:49.866 10:08:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:26:49.866 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:49.866 10:08:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@163 -- # true 00:26:49.866 10:08:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:26:49.866 10:08:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:26:49.866 10:08:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:26:49.866 10:08:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:26:50.124 10:08:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:26:50.124 10:08:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:26:50.124 10:08:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:26:50.124 10:08:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:26:50.124 10:08:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:26:50.124 10:08:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:26:50.124 10:08:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:26:50.124 10:08:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:26:50.124 10:08:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:26:50.124 10:08:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:26:50.124 10:08:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:26:50.124 10:08:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:26:50.124 10:08:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:26:50.124 10:08:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:26:50.125 10:08:03 
nvmf_tcp.nvmf_failover -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:26:50.125 10:08:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:26:50.125 10:08:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:26:50.125 10:08:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:26:50.125 10:08:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:26:50.125 10:08:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:26:50.125 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:50.125 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.060 ms 00:26:50.125 00:26:50.125 --- 10.0.0.2 ping statistics --- 00:26:50.125 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:50.125 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:26:50.125 10:08:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:26:50.125 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:26:50.125 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.036 ms 00:26:50.125 00:26:50.125 --- 10.0.0.3 ping statistics --- 00:26:50.125 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:50.125 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:26:50.125 10:08:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:26:50.125 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:50.125 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:26:50.125 00:26:50.125 --- 10.0.0.1 ping statistics --- 00:26:50.125 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:50.125 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:26:50.125 10:08:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:50.125 10:08:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@433 -- # return 0 00:26:50.125 10:08:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:50.125 10:08:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:50.125 10:08:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:50.125 10:08:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:50.125 10:08:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:50.125 10:08:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:50.125 10:08:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:50.125 10:08:03 nvmf_tcp.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:26:50.125 10:08:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:50.125 10:08:03 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:50.125 10:08:03 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:26:50.125 10:08:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=87796 00:26:50.125 10:08:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 87796 00:26:50.125 10:08:03 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 87796 ']' 00:26:50.125 10:08:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 
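The nvmf_veth_init block traced above is the entire virtual test network used when NET_TYPE=virt: one initiator-side veth on the host, two target-side veths moved into the nvmf_tgt_ns_spdk namespace, all joined by a single bridge, plus an iptables rule for the NVMe/TCP port. Condensed into a standalone sketch (names and addresses copied from the trace; the initial best-effort teardown commands and their "Cannot find device" noise are omitted):

# Build the namespace and the three veth pairs.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
# Address the initiator side (10.0.0.1) and the two target IPs (10.0.0.2/3).
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
# Bring everything up and bridge the host-side peers together.
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
# Allow NVMe/TCP traffic and verify reachability in both directions.
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2
ping -c 1 10.0.0.3
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1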
00:26:50.125 10:08:03 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:50.125 10:08:03 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:50.125 10:08:03 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:50.125 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:50.125 10:08:03 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:50.125 10:08:03 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:26:50.125 [2024-07-15 10:08:03.643196] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:26:50.125 [2024-07-15 10:08:03.643260] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:50.384 [2024-07-15 10:08:03.782823] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:26:50.384 [2024-07-15 10:08:03.886052] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:50.384 [2024-07-15 10:08:03.886100] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:50.384 [2024-07-15 10:08:03.886106] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:50.384 [2024-07-15 10:08:03.886111] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:50.384 [2024-07-15 10:08:03.886115] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
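nvmfappstart then launches the target inside that namespace (core mask 0xE, all tracepoint groups enabled) and blocks until the RPC socket answers before the script issues any rpc.py calls. A rough equivalent of that startup-and-wait step (the polling loop is illustrative; the real waitforlisten helper in autotest_common.sh also enforces a retry limit):

ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
nvmfpid=$!                                   # 87796 in this run
# Poll the default RPC socket until the target is ready to serve rpc.py.
until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    kill -0 "$nvmfpid" 2>/dev/null || { echo 'nvmf_tgt exited during startup' >&2; exit 1; }
    sleep 0.1
done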
00:26:50.384 [2024-07-15 10:08:03.886337] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:26:50.384 [2024-07-15 10:08:03.886516] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:50.384 [2024-07-15 10:08:03.886538] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:26:50.969 10:08:04 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:50.969 10:08:04 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:26:50.969 10:08:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:50.969 10:08:04 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:50.969 10:08:04 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:26:51.245 10:08:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:51.245 10:08:04 nvmf_tcp.nvmf_failover -- host/failover.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:26:51.245 [2024-07-15 10:08:04.730775] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:51.245 10:08:04 nvmf_tcp.nvmf_failover -- host/failover.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:26:51.503 Malloc0 00:26:51.503 10:08:04 nvmf_tcp.nvmf_failover -- host/failover.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:51.762 10:08:05 nvmf_tcp.nvmf_failover -- host/failover.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:51.762 10:08:05 nvmf_tcp.nvmf_failover -- host/failover.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:52.020 [2024-07-15 10:08:05.513408] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:52.020 10:08:05 nvmf_tcp.nvmf_failover -- host/failover.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:26:52.279 [2024-07-15 10:08:05.697172] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:26:52.279 10:08:05 nvmf_tcp.nvmf_failover -- host/failover.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:26:52.537 [2024-07-15 10:08:05.880977] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:26:52.537 10:08:05 nvmf_tcp.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=87908 00:26:52.537 10:08:05 nvmf_tcp.nvmf_failover -- host/failover.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:26:52.537 10:08:05 nvmf_tcp.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:52.537 10:08:05 nvmf_tcp.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 87908 /var/tmp/bdevperf.sock 00:26:52.537 10:08:05 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 87908 ']' 00:26:52.537 10:08:05 nvmf_tcp.nvmf_failover -- 
common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:52.537 10:08:05 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:52.537 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:26:52.537 10:08:05 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:52.537 10:08:05 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:52.537 10:08:05 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:26:53.473 10:08:06 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:53.474 10:08:06 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:26:53.474 10:08:06 nvmf_tcp.nvmf_failover -- host/failover.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:26:53.474 NVMe0n1 00:26:53.732 10:08:07 nvmf_tcp.nvmf_failover -- host/failover.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:26:53.732 00:26:53.732 10:08:07 nvmf_tcp.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=87950 00:26:53.732 10:08:07 nvmf_tcp.nvmf_failover -- host/failover.sh@38 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:26:53.732 10:08:07 nvmf_tcp.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:26:55.109 10:08:08 nvmf_tcp.nvmf_failover -- host/failover.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:55.109 10:08:08 nvmf_tcp.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:26:58.404 10:08:11 nvmf_tcp.nvmf_failover -- host/failover.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:26:58.404 00:26:58.404 10:08:11 nvmf_tcp.nvmf_failover -- host/failover.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:26:58.404 [2024-07-15 10:08:11.967050] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x160c340 is same with the state(5) to be set 00:26:58.404 [2024-07-15 10:08:11.967097] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x160c340 is same with the state(5) to be set 00:26:58.404 [2024-07-15 10:08:11.967104] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x160c340 is same with the state(5) to be set 00:26:58.404 [2024-07-15 10:08:11.967109] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x160c340 is same with the state(5) to be set 00:26:58.404 [2024-07-15 10:08:11.967114] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x160c340 is same with the state(5) to be set 00:26:58.404 [2024-07-15 10:08:11.967118] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x160c340 is same with the state(5) to be set 00:26:58.404 [2024-07-15 10:08:11.967123] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x160c340 is same with the state(5) to be set 00:26:58.664 10:08:11 nvmf_tcp.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:27:01.957 10:08:14 nvmf_tcp.nvmf_failover -- host/failover.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:01.957 [2024-07-15 10:08:15.163345] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:01.957 10:08:15 nvmf_tcp.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:27:02.896 10:08:16 nvmf_tcp.nvmf_failover -- host/failover.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
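Everything from host/failover.sh@35 through @57 above is the failover exercise proper: bdevperf (started earlier with -w verify -t 15 against /var/tmp/bdevperf.sock) keeps I/O running on NVMe0 while listeners are removed and re-added underneath it. The same sequence with the xtrace noise stripped (commands copied from the trace; the sleeps give the initiator time to detect each path change):

rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
brpc() { $rpc_py -s /var/tmp/bdevperf.sock "$@"; }      # RPCs aimed at bdevperf, not the target
# Give bdevperf two paths to the subsystem, then start the 15 s verify workload.
brpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
brpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &
run_test_pid=$!                                         # 87950 in this run
sleep 1
$rpc_py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420   # force failover to 4421
sleep 3
brpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
$rpc_py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421   # force failover to 4422
sleep 3
$rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420      # bring 4420 back
sleep 1
$rpc_py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422   # fail back to 4420
wait "$run_test_pid"                                    # returns once the verify workload finishes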
[2024-07-15 10:08:16.377496] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x160ca20 is same with the state(5) to be set 00:27:02.898 10:08:16 nvmf_tcp.nvmf_failover -- host/failover.sh@59 -- # wait 87950 00:27:09.474 0 00:27:09.474 10:08:22 nvmf_tcp.nvmf_failover -- host/failover.sh@61 -- # killprocess 87908 00:27:09.474 10:08:22 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 87908 ']' 00:27:09.474 10:08:22 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 87908 00:27:09.474 10:08:22 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:27:09.474 10:08:22 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux =
Linux ']' 00:27:09.474 10:08:22 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 87908 00:27:09.474 killing process with pid 87908 00:27:09.474 10:08:22 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:27:09.474 10:08:22 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:27:09.474 10:08:22 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 87908' 00:27:09.474 10:08:22 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 87908 00:27:09.474 10:08:22 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 87908 00:27:09.474 10:08:22 nvmf_tcp.nvmf_failover -- host/failover.sh@63 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:27:09.474 [2024-07-15 10:08:05.954332] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:27:09.474 [2024-07-15 10:08:05.954424] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87908 ] 00:27:09.474 [2024-07-15 10:08:06.090625] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:09.474 [2024-07-15 10:08:06.186850] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:09.474 Running I/O for 15 seconds... 00:27:09.474 [2024-07-15 10:08:08.492447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:109240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.474 [2024-07-15 10:08:08.492502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.474 [2024-07-15 10:08:08.492525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:109248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.474 [2024-07-15 10:08:08.492534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.474 [2024-07-15 10:08:08.492545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:109256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.474 [2024-07-15 10:08:08.492553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.474 [2024-07-15 10:08:08.492563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:109264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.474 [2024-07-15 10:08:08.492572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.474 [2024-07-15 10:08:08.492582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:109272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.474 [2024-07-15 10:08:08.492590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.474 [2024-07-15 10:08:08.492600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:109280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.474 [2024-07-15 10:08:08.492608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.474 [2024-07-15 10:08:08.492618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:109288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.474 [2024-07-15 10:08:08.492626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.474 [2024-07-15 10:08:08.492635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:108352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.474 [2024-07-15 10:08:08.492643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.474 [2024-07-15 10:08:08.492654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:108360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.474 [2024-07-15 10:08:08.492669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.474 [2024-07-15 10:08:08.492679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:108368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.474 [2024-07-15 10:08:08.492688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.474 [2024-07-15 10:08:08.492697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:108376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.474 [2024-07-15 10:08:08.492705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.474 [2024-07-15 10:08:08.492741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:108384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.474 [2024-07-15 10:08:08.492750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.474 [2024-07-15 10:08:08.492760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:108392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.474 [2024-07-15 10:08:08.492768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.474 [2024-07-15 10:08:08.492778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:108400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.474 [2024-07-15 10:08:08.492786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.474 [2024-07-15 10:08:08.492796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:108408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.474 [2024-07-15 10:08:08.492804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.474 [2024-07-15 10:08:08.492814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:108416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.474 [2024-07-15 10:08:08.492821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:27:09.474 [2024-07-15 10:08:08.492832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:108424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.474 [2024-07-15 10:08:08.492840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.474 [2024-07-15 10:08:08.492850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:108432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.474 [2024-07-15 10:08:08.492858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.474 [2024-07-15 10:08:08.492868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:108440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.474 [2024-07-15 10:08:08.492876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.474 [2024-07-15 10:08:08.492886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:108448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.474 [2024-07-15 10:08:08.492894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.474 [2024-07-15 10:08:08.492908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:108456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.474 [2024-07-15 10:08:08.492916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.474 [2024-07-15 10:08:08.492925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:108464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.474 [2024-07-15 10:08:08.492934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.474 [2024-07-15 10:08:08.492943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:108472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.474 [2024-07-15 10:08:08.492951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.474 [2024-07-15 10:08:08.492961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:108480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.474 [2024-07-15 10:08:08.492974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.474 [2024-07-15 10:08:08.492984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:108488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.474 [2024-07-15 10:08:08.492992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.474 [2024-07-15 10:08:08.493002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:108496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.474 [2024-07-15 10:08:08.493010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:27:09.474 [2024-07-15 10:08:08.493019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:108504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.474 [2024-07-15 10:08:08.493027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.474 [2024-07-15 10:08:08.493037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:108512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.475 [2024-07-15 10:08:08.493044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.475 [2024-07-15 10:08:08.493054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:108520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.475 [2024-07-15 10:08:08.493062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.475 [2024-07-15 10:08:08.493071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:108528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.475 [2024-07-15 10:08:08.493079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.475 [2024-07-15 10:08:08.493089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:109296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.475 [2024-07-15 10:08:08.493098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.475 [2024-07-15 10:08:08.493108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:108536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.475 [2024-07-15 10:08:08.493116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.475 [2024-07-15 10:08:08.493125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:108544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.475 [2024-07-15 10:08:08.493133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.475 [2024-07-15 10:08:08.493147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:108552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.475 [2024-07-15 10:08:08.493155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.475 [2024-07-15 10:08:08.493165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:108560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.475 [2024-07-15 10:08:08.493173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.475 [2024-07-15 10:08:08.493183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:108568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.475 [2024-07-15 10:08:08.493191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.475 [2024-07-15 
10:08:08.493207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:108576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.475 [2024-07-15 10:08:08.493215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.475 [2024-07-15 10:08:08.493225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:108584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.475 [2024-07-15 10:08:08.493233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.475 [2024-07-15 10:08:08.493242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:108592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.475 [2024-07-15 10:08:08.493250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.475 [2024-07-15 10:08:08.493260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:108600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.475 [2024-07-15 10:08:08.493268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.475 [2024-07-15 10:08:08.493277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:108608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.475 [2024-07-15 10:08:08.493286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.475 [2024-07-15 10:08:08.493295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:108616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.475 [2024-07-15 10:08:08.493303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.475 [2024-07-15 10:08:08.493313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:108624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.475 [2024-07-15 10:08:08.493320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.475 [2024-07-15 10:08:08.493330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:108632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.475 [2024-07-15 10:08:08.493339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.475 [2024-07-15 10:08:08.493348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:108640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.475 [2024-07-15 10:08:08.493356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.475 [2024-07-15 10:08:08.493365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:108648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.475 [2024-07-15 10:08:08.493374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.475 [2024-07-15 10:08:08.493383] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:108656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.475 [2024-07-15 10:08:08.493391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.475 [2024-07-15 10:08:08.493401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:108664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.475 [2024-07-15 10:08:08.493409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.475 [2024-07-15 10:08:08.493418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:108672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.475 [2024-07-15 10:08:08.493430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.475 [2024-07-15 10:08:08.493439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:108680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.475 [2024-07-15 10:08:08.493448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.475 [2024-07-15 10:08:08.493458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:108688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.475 [2024-07-15 10:08:08.493466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.475 [2024-07-15 10:08:08.493476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:108696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.475 [2024-07-15 10:08:08.493484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.475 [2024-07-15 10:08:08.493495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:108704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.475 [2024-07-15 10:08:08.493503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.475 [2024-07-15 10:08:08.493513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:108712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.475 [2024-07-15 10:08:08.493522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.475 [2024-07-15 10:08:08.493531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:108720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.475 [2024-07-15 10:08:08.493539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.475 [2024-07-15 10:08:08.493550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:109304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.475 [2024-07-15 10:08:08.493558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.475 [2024-07-15 10:08:08.493568] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:109312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.475 [2024-07-15 10:08:08.493575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.475 [2024-07-15 10:08:08.493585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:109320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.475 [2024-07-15 10:08:08.493593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.475 [2024-07-15 10:08:08.493603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:109328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.475 [2024-07-15 10:08:08.493611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.475 [2024-07-15 10:08:08.493620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:109336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.475 [2024-07-15 10:08:08.493628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.475 [2024-07-15 10:08:08.493638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:109344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.475 [2024-07-15 10:08:08.493653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.475 [2024-07-15 10:08:08.493673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:109352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.475 [2024-07-15 10:08:08.493682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.475 [2024-07-15 10:08:08.493692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:109360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.475 [2024-07-15 10:08:08.493699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.475 [2024-07-15 10:08:08.493709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:109368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.475 [2024-07-15 10:08:08.493717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.475 [2024-07-15 10:08:08.493727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:108728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.475 [2024-07-15 10:08:08.493735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.475 [2024-07-15 10:08:08.493744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:108736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.475 [2024-07-15 10:08:08.493752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.475 [2024-07-15 10:08:08.493762] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:29 nsid:1 lba:108744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.475 [2024-07-15 10:08:08.493770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.475 [2024-07-15 10:08:08.493780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:108752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.475 [2024-07-15 10:08:08.493788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.475 [2024-07-15 10:08:08.493798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:108760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.476 [2024-07-15 10:08:08.493807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.476 [2024-07-15 10:08:08.493817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:108768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.476 [2024-07-15 10:08:08.493825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.476 [2024-07-15 10:08:08.493834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:108776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.476 [2024-07-15 10:08:08.493842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.476 [2024-07-15 10:08:08.493852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:108784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.476 [2024-07-15 10:08:08.493860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.476 [2024-07-15 10:08:08.493869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:108792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.476 [2024-07-15 10:08:08.493878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.476 [2024-07-15 10:08:08.493887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:108800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.476 [2024-07-15 10:08:08.493900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.476 [2024-07-15 10:08:08.493909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:108808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.476 [2024-07-15 10:08:08.493917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.476 [2024-07-15 10:08:08.493927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:108816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.476 [2024-07-15 10:08:08.493935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.476 [2024-07-15 10:08:08.493944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 
nsid:1 lba:108824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.476 [2024-07-15 10:08:08.493954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.476 [2024-07-15 10:08:08.493964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:108832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.476 [2024-07-15 10:08:08.493972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.476 [2024-07-15 10:08:08.493982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:108840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.476 [2024-07-15 10:08:08.493990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.476 [2024-07-15 10:08:08.494000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:108848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.476 [2024-07-15 10:08:08.494008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.476 [2024-07-15 10:08:08.494018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:108856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.476 [2024-07-15 10:08:08.494026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.476 [2024-07-15 10:08:08.494036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:108864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.476 [2024-07-15 10:08:08.494043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.476 [2024-07-15 10:08:08.494053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:108872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.476 [2024-07-15 10:08:08.494061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.476 [2024-07-15 10:08:08.494071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:108880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.476 [2024-07-15 10:08:08.494079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.476 [2024-07-15 10:08:08.494091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:108888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.476 [2024-07-15 10:08:08.494099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.476 [2024-07-15 10:08:08.494108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:108896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.476 [2024-07-15 10:08:08.494117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.476 [2024-07-15 10:08:08.494127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:108904 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.476 [2024-07-15 10:08:08.494138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.476 [2024-07-15 10:08:08.494148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:108912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.476 [2024-07-15 10:08:08.494156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.476 [2024-07-15 10:08:08.494165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:108920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.476 [2024-07-15 10:08:08.494173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.476 [2024-07-15 10:08:08.494183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:108928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.476 [2024-07-15 10:08:08.494191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.476 [2024-07-15 10:08:08.494201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:108936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.476 [2024-07-15 10:08:08.494209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.476 [2024-07-15 10:08:08.494219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:108944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.476 [2024-07-15 10:08:08.494227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.476 [2024-07-15 10:08:08.494236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:108952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.476 [2024-07-15 10:08:08.494247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.476 [2024-07-15 10:08:08.494256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:108960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.476 [2024-07-15 10:08:08.494264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.476 [2024-07-15 10:08:08.494274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:108968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.476 [2024-07-15 10:08:08.494281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.476 [2024-07-15 10:08:08.494291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:108976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.476 [2024-07-15 10:08:08.494299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.476 [2024-07-15 10:08:08.494309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:108984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:27:09.476 [2024-07-15 10:08:08.494317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.476 [2024-07-15 10:08:08.494326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:108992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.476 [2024-07-15 10:08:08.494335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.476 [2024-07-15 10:08:08.494344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:109000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.476 [2024-07-15 10:08:08.494352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.476 [2024-07-15 10:08:08.494368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:109008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.476 [2024-07-15 10:08:08.494376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.476 [2024-07-15 10:08:08.494388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:109016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.476 [2024-07-15 10:08:08.494396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.476 [2024-07-15 10:08:08.494406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:109024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.476 [2024-07-15 10:08:08.494414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.476 [2024-07-15 10:08:08.494423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:109032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.476 [2024-07-15 10:08:08.494431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.476 [2024-07-15 10:08:08.494441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:109040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.476 [2024-07-15 10:08:08.494449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.476 [2024-07-15 10:08:08.494458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:109048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.476 [2024-07-15 10:08:08.494466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.476 [2024-07-15 10:08:08.494476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:109056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.476 [2024-07-15 10:08:08.494484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.476 [2024-07-15 10:08:08.494493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:109064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.476 [2024-07-15 
10:08:08.494501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.476 [2024-07-15 10:08:08.494511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:109072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.476 [2024-07-15 10:08:08.494519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.476 [2024-07-15 10:08:08.494529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:109080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.476 [2024-07-15 10:08:08.494539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.476 [2024-07-15 10:08:08.494549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:109088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.476 [2024-07-15 10:08:08.494557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.476 [2024-07-15 10:08:08.494566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:109096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.476 [2024-07-15 10:08:08.494574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.477 [2024-07-15 10:08:08.494583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:109104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.477 [2024-07-15 10:08:08.494595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.477 [2024-07-15 10:08:08.494605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:109112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.477 [2024-07-15 10:08:08.494613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.477 [2024-07-15 10:08:08.494622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:109120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.477 [2024-07-15 10:08:08.494630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.477 [2024-07-15 10:08:08.494640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:109128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.477 [2024-07-15 10:08:08.494648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.477 [2024-07-15 10:08:08.494657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:109136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.477 [2024-07-15 10:08:08.494673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.477 [2024-07-15 10:08:08.494684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:109144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.477 [2024-07-15 10:08:08.494693] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.477 [2024-07-15 10:08:08.494702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:109152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.477 [2024-07-15 10:08:08.494710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.477 [2024-07-15 10:08:08.494719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:109160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.477 [2024-07-15 10:08:08.494728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.477 [2024-07-15 10:08:08.494738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:109168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.477 [2024-07-15 10:08:08.494746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.477 [2024-07-15 10:08:08.494756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:109176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.477 [2024-07-15 10:08:08.494764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.477 [2024-07-15 10:08:08.494774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:109184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.477 [2024-07-15 10:08:08.494782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.477 [2024-07-15 10:08:08.494792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:109192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.477 [2024-07-15 10:08:08.494800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.477 [2024-07-15 10:08:08.494810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:109200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.477 [2024-07-15 10:08:08.494818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.477 [2024-07-15 10:08:08.494831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:109208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.477 [2024-07-15 10:08:08.494841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.477 [2024-07-15 10:08:08.494851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:109216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.477 [2024-07-15 10:08:08.494859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.477 [2024-07-15 10:08:08.494868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:109224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.477 [2024-07-15 10:08:08.494877] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.477 [2024-07-15 10:08:08.494886] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf4ec90 is same with the state(5) to be set 00:27:09.477 [2024-07-15 10:08:08.494896] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:09.477 [2024-07-15 10:08:08.494902] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:09.477 [2024-07-15 10:08:08.494908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:109232 len:8 PRP1 0x0 PRP2 0x0 00:27:09.477 [2024-07-15 10:08:08.494916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.477 [2024-07-15 10:08:08.494960] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xf4ec90 was disconnected and freed. reset controller. 00:27:09.477 [2024-07-15 10:08:08.494971] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:27:09.477 [2024-07-15 10:08:08.495014] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:09.477 [2024-07-15 10:08:08.495025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.477 [2024-07-15 10:08:08.495034] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:09.477 [2024-07-15 10:08:08.495044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.477 [2024-07-15 10:08:08.495053] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:09.477 [2024-07-15 10:08:08.495061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.477 [2024-07-15 10:08:08.495069] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:09.477 [2024-07-15 10:08:08.495078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.477 [2024-07-15 10:08:08.495086] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:09.477 [2024-07-15 10:08:08.497893] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:09.477 [2024-07-15 10:08:08.497927] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2e30 (9): Bad file descriptor 00:27:09.477 [2024-07-15 10:08:08.533466] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
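The records above cover the first failover cycle of the run: the I/O qpair on 10.0.0.2:4420 is disconnected and freed, bdev_nvme starts a failover to the alternate path on 10.0.0.2:4421, and the controller reset completes successfully. For orientation only, a minimal sketch of how two paths to the same subsystem are typically registered through SPDK's scripts/rpc.py follows; the addresses, ports and NQN are taken from this log, while the bdev name, the RPC socket path and the multipath-mode remark are assumptions and need not match what test/nvmf/host/failover.sh actually does.

  # Sketch, not the test's exact commands: attach the same subsystem over two
  # TCP trids so bdev_nvme can fail over from 4420 to 4421 when a path drops.
  RPC="scripts/rpc.py -s /var/tmp/bdevperf.sock"   # assumed RPC socket of the bdevperf app
  $RPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  $RPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  # Reusing the bdev name (-b NVMe0) with a different trsvcid adds an alternate
  # trid for the existing controller rather than creating a new one; depending
  # on the SPDK version an explicit multipath/failover mode may also be needed.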
00:27:09.477 [2024-07-15 10:08:11.967832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:59120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.477 [2024-07-15 10:08:11.967878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.477 [2024-07-15 10:08:11.967921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:59128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.477 [2024-07-15 10:08:11.967932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.477 [2024-07-15 10:08:11.967943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:59136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.477 [2024-07-15 10:08:11.967953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.477 [2024-07-15 10:08:11.967963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:59144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.477 [2024-07-15 10:08:11.967973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.477 [2024-07-15 10:08:11.967983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:59152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.477 [2024-07-15 10:08:11.967993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.477 [2024-07-15 10:08:11.968004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:59528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.477 [2024-07-15 10:08:11.968013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.477 [2024-07-15 10:08:11.968023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:59536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.477 [2024-07-15 10:08:11.968032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.477 [2024-07-15 10:08:11.968043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:59544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.477 [2024-07-15 10:08:11.968052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.477 [2024-07-15 10:08:11.968063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:59552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.477 [2024-07-15 10:08:11.968072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.477 [2024-07-15 10:08:11.968082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:59560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.477 [2024-07-15 10:08:11.968091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.477 [2024-07-15 10:08:11.968102] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:59568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.477 [2024-07-15 10:08:11.968111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.477 [2024-07-15 10:08:11.968122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:59576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.477 [2024-07-15 10:08:11.968131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.477 [2024-07-15 10:08:11.968142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:59584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.477 [2024-07-15 10:08:11.968151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.477 [2024-07-15 10:08:11.968162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:59592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.477 [2024-07-15 10:08:11.968176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.477 [2024-07-15 10:08:11.968187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:59600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.477 [2024-07-15 10:08:11.968196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.477 [2024-07-15 10:08:11.968207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:59608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.477 [2024-07-15 10:08:11.968216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.477 [2024-07-15 10:08:11.968226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:59616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.477 [2024-07-15 10:08:11.968237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.477 [2024-07-15 10:08:11.968248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:59624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.478 [2024-07-15 10:08:11.968256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.478 [2024-07-15 10:08:11.968267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:59632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.478 [2024-07-15 10:08:11.968276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.478 [2024-07-15 10:08:11.968287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:59640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.478 [2024-07-15 10:08:11.968296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.478 [2024-07-15 10:08:11.968307] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:59648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.478 [2024-07-15 10:08:11.968316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.478 [2024-07-15 10:08:11.968327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:59656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.478 [2024-07-15 10:08:11.968336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.478 [2024-07-15 10:08:11.968355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:59664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.478 [2024-07-15 10:08:11.968364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.478 [2024-07-15 10:08:11.968391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:59672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.478 [2024-07-15 10:08:11.968401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.478 [2024-07-15 10:08:11.968413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:59680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.478 [2024-07-15 10:08:11.968423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.478 [2024-07-15 10:08:11.968434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:59688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.478 [2024-07-15 10:08:11.968444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.478 [2024-07-15 10:08:11.968455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:59696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.478 [2024-07-15 10:08:11.968470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.478 [2024-07-15 10:08:11.968483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:59704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.478 [2024-07-15 10:08:11.968492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.478 [2024-07-15 10:08:11.968504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:59712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.478 [2024-07-15 10:08:11.968513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.478 [2024-07-15 10:08:11.968525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:59720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.478 [2024-07-15 10:08:11.968535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.478 [2024-07-15 10:08:11.968547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:59728 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.478 [2024-07-15 10:08:11.968556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.478 [2024-07-15 10:08:11.968568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:59736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.478 [2024-07-15 10:08:11.968577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.478 [2024-07-15 10:08:11.968589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:59744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.478 [2024-07-15 10:08:11.968602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.478 [2024-07-15 10:08:11.968614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:59752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.478 [2024-07-15 10:08:11.968624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.478 [2024-07-15 10:08:11.968635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:59760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.478 [2024-07-15 10:08:11.968645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.478 [2024-07-15 10:08:11.968656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:59768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.478 [2024-07-15 10:08:11.968666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.478 [2024-07-15 10:08:11.968686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:59776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.478 [2024-07-15 10:08:11.968697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.478 [2024-07-15 10:08:11.968709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:59784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.478 [2024-07-15 10:08:11.968718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.478 [2024-07-15 10:08:11.968730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:59792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.478 [2024-07-15 10:08:11.968739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.478 [2024-07-15 10:08:11.968756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:59800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.478 [2024-07-15 10:08:11.968766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.478 [2024-07-15 10:08:11.968777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:59808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.478 
[2024-07-15 10:08:11.968787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.478 [2024-07-15 10:08:11.968799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:59816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.478 [2024-07-15 10:08:11.968808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.478 [2024-07-15 10:08:11.968819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:59824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.478 [2024-07-15 10:08:11.968829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.478 [2024-07-15 10:08:11.968841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:59832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.478 [2024-07-15 10:08:11.968850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.478 [2024-07-15 10:08:11.968862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:59840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.478 [2024-07-15 10:08:11.968871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.478 [2024-07-15 10:08:11.968883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:59848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.478 [2024-07-15 10:08:11.968893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.478 [2024-07-15 10:08:11.968904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:59856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.478 [2024-07-15 10:08:11.968914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.478 [2024-07-15 10:08:11.968926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:59864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.478 [2024-07-15 10:08:11.968935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.478 [2024-07-15 10:08:11.968947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:59160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.478 [2024-07-15 10:08:11.968958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.478 [2024-07-15 10:08:11.968970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:59168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.478 [2024-07-15 10:08:11.968979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.478 [2024-07-15 10:08:11.968990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:59176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.478 [2024-07-15 10:08:11.969001] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.478 [2024-07-15 10:08:11.969012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:59184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.478 [2024-07-15 10:08:11.969027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.478 [2024-07-15 10:08:11.969039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:59192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.479 [2024-07-15 10:08:11.969048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.479 [2024-07-15 10:08:11.969060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:59200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.479 [2024-07-15 10:08:11.969070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.479 [2024-07-15 10:08:11.969081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:59208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.479 [2024-07-15 10:08:11.969091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.479 [2024-07-15 10:08:11.969102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:59216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.479 [2024-07-15 10:08:11.969112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.479 [2024-07-15 10:08:11.969124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:59224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.479 [2024-07-15 10:08:11.969133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.479 [2024-07-15 10:08:11.969145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:59232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.479 [2024-07-15 10:08:11.969154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.479 [2024-07-15 10:08:11.969166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:59240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.479 [2024-07-15 10:08:11.969175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.479 [2024-07-15 10:08:11.969187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:59248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.479 [2024-07-15 10:08:11.969196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.479 [2024-07-15 10:08:11.969208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:59256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.479 [2024-07-15 10:08:11.969218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.479 [2024-07-15 10:08:11.969229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:59264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.479 [2024-07-15 10:08:11.969240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.479 [2024-07-15 10:08:11.969251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:59272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.479 [2024-07-15 10:08:11.969261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.479 [2024-07-15 10:08:11.969273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:59280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.479 [2024-07-15 10:08:11.969283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.479 [2024-07-15 10:08:11.969298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:59288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.479 [2024-07-15 10:08:11.969310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.479 [2024-07-15 10:08:11.969322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:59296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.479 [2024-07-15 10:08:11.969332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.479 [2024-07-15 10:08:11.969344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:59304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.479 [2024-07-15 10:08:11.969354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.479 [2024-07-15 10:08:11.969365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:59312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.479 [2024-07-15 10:08:11.969375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.479 [2024-07-15 10:08:11.969386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:59320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.479 [2024-07-15 10:08:11.969396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.479 [2024-07-15 10:08:11.969408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:59328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.479 [2024-07-15 10:08:11.969417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.479 [2024-07-15 10:08:11.969429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:59336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.479 [2024-07-15 10:08:11.969438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.479 [2024-07-15 10:08:11.969450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:59344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.479 [2024-07-15 10:08:11.969459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.479 [2024-07-15 10:08:11.969471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:59352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.479 [2024-07-15 10:08:11.969480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.479 [2024-07-15 10:08:11.969492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:59360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.479 [2024-07-15 10:08:11.969501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.479 [2024-07-15 10:08:11.969513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:59368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.479 [2024-07-15 10:08:11.969523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.479 [2024-07-15 10:08:11.969534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:59376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.479 [2024-07-15 10:08:11.969543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.479 [2024-07-15 10:08:11.969555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:59384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.479 [2024-07-15 10:08:11.969565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.479 [2024-07-15 10:08:11.969592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:59392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.479 [2024-07-15 10:08:11.969601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.479 [2024-07-15 10:08:11.969612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:59400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.479 [2024-07-15 10:08:11.969621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.479 [2024-07-15 10:08:11.969632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:59408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.479 [2024-07-15 10:08:11.969641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.479 [2024-07-15 10:08:11.969651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:59416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.479 [2024-07-15 10:08:11.969662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.479 
[2024-07-15 10:08:11.969680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:59424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.479 [2024-07-15 10:08:11.969690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.479 [2024-07-15 10:08:11.969701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:59432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.479 [2024-07-15 10:08:11.969710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.479 [2024-07-15 10:08:11.969720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:59440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.479 [2024-07-15 10:08:11.969730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.479 [2024-07-15 10:08:11.969741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:59448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.479 [2024-07-15 10:08:11.969750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.479 [2024-07-15 10:08:11.969761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:59456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.479 [2024-07-15 10:08:11.969771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.479 [2024-07-15 10:08:11.969782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:59464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.479 [2024-07-15 10:08:11.969791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.479 [2024-07-15 10:08:11.969802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:59472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.479 [2024-07-15 10:08:11.969811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.479 [2024-07-15 10:08:11.969822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:59480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.479 [2024-07-15 10:08:11.969831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.479 [2024-07-15 10:08:11.969853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:59488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.479 [2024-07-15 10:08:11.969865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.479 [2024-07-15 10:08:11.969875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:59496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.479 [2024-07-15 10:08:11.969883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.479 [2024-07-15 10:08:11.969892] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:59504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.479 [2024-07-15 10:08:11.969900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.479 [2024-07-15 10:08:11.969910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:59512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.479 [2024-07-15 10:08:11.969918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.479 [2024-07-15 10:08:11.969928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:59520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.479 [2024-07-15 10:08:11.969936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.479 [2024-07-15 10:08:11.969945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:59872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.479 [2024-07-15 10:08:11.969953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.480 [2024-07-15 10:08:11.969963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:59880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.480 [2024-07-15 10:08:11.969971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.480 [2024-07-15 10:08:11.969980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:59888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.480 [2024-07-15 10:08:11.969990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.480 [2024-07-15 10:08:11.969999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:59896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.480 [2024-07-15 10:08:11.970007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.480 [2024-07-15 10:08:11.970017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:59904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.480 [2024-07-15 10:08:11.970026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.480 [2024-07-15 10:08:11.970035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:59912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.480 [2024-07-15 10:08:11.970043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.480 [2024-07-15 10:08:11.970052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:59920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.480 [2024-07-15 10:08:11.970060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.480 [2024-07-15 10:08:11.970070] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:59928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.480 [2024-07-15 10:08:11.970078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.480 [2024-07-15 10:08:11.970091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:59936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.480 [2024-07-15 10:08:11.970099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.480 [2024-07-15 10:08:11.970109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:59944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.480 [2024-07-15 10:08:11.970117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.480 [2024-07-15 10:08:11.970126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:59952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.480 [2024-07-15 10:08:11.970134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.480 [2024-07-15 10:08:11.970144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:59960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.480 [2024-07-15 10:08:11.970152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.480 [2024-07-15 10:08:11.970161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:59968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.480 [2024-07-15 10:08:11.970169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.480 [2024-07-15 10:08:11.970179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:59976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.480 [2024-07-15 10:08:11.970187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.480 [2024-07-15 10:08:11.970196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:59984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.480 [2024-07-15 10:08:11.970204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.480 [2024-07-15 10:08:11.970214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:59992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.480 [2024-07-15 10:08:11.970222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.480 [2024-07-15 10:08:11.970231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:60000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.480 [2024-07-15 10:08:11.970240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.480 [2024-07-15 10:08:11.970249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:60008 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.480 [2024-07-15 10:08:11.970257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.480 [2024-07-15 10:08:11.970266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:60016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.480 [2024-07-15 10:08:11.970276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.480 [2024-07-15 10:08:11.970285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:60024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.480 [2024-07-15 10:08:11.970293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.480 [2024-07-15 10:08:11.970303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:60032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.480 [2024-07-15 10:08:11.970315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.480 [2024-07-15 10:08:11.970324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:60040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.480 [2024-07-15 10:08:11.970332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.480 [2024-07-15 10:08:11.970342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:60048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.480 [2024-07-15 10:08:11.970350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.480 [2024-07-15 10:08:11.970359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:60056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.480 [2024-07-15 10:08:11.970367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.480 [2024-07-15 10:08:11.970377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:60064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.480 [2024-07-15 10:08:11.970390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.480 [2024-07-15 10:08:11.970399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:60072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.480 [2024-07-15 10:08:11.970407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.480 [2024-07-15 10:08:11.970417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:60080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.480 [2024-07-15 10:08:11.970425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.480 [2024-07-15 10:08:11.970434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:60088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:27:09.480 [2024-07-15 10:08:11.970443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.480 [2024-07-15 10:08:11.970452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:60096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.480 [2024-07-15 10:08:11.970460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.480 [2024-07-15 10:08:11.970470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:60104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.480 [2024-07-15 10:08:11.970478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.480 [2024-07-15 10:08:11.970487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:60112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.480 [2024-07-15 10:08:11.970495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.480 [2024-07-15 10:08:11.970505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:60120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.480 [2024-07-15 10:08:11.970512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.480 [2024-07-15 10:08:11.970538] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:09.480 [2024-07-15 10:08:11.970546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:60128 len:8 PRP1 0x0 PRP2 0x0 00:27:09.480 [2024-07-15 10:08:11.970554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.480 [2024-07-15 10:08:11.970569] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:09.480 [2024-07-15 10:08:11.970576] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:09.480 [2024-07-15 10:08:11.970584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:60136 len:8 PRP1 0x0 PRP2 0x0 00:27:09.480 [2024-07-15 10:08:11.970592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.480 [2024-07-15 10:08:11.970633] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xf50d90 was disconnected and freed. reset controller. 
00:27:09.480 [2024-07-15 10:08:11.970645] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:27:09.480 [2024-07-15 10:08:11.970694] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:09.480 [2024-07-15 10:08:11.970705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.480 [2024-07-15 10:08:11.970715] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:09.480 [2024-07-15 10:08:11.970723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.480 [2024-07-15 10:08:11.970731] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:09.480 [2024-07-15 10:08:11.970739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.480 [2024-07-15 10:08:11.970749] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:09.480 [2024-07-15 10:08:11.970757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.480 [2024-07-15 10:08:11.970766] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:09.480 [2024-07-15 10:08:11.973833] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:09.480 [2024-07-15 10:08:11.973869] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2e30 (9): Bad file descriptor 00:27:09.480 [2024-07-15 10:08:12.004358] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
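The records above show bdev_nvme aborting the queued I/O on qpair 0xf50d90, failing the path over from 10.0.0.2:4421 to 10.0.0.2:4422 on nqn.2016-06.io.spdk:cnode1, and completing the controller reset. A minimal sketch of registering such a failover-capable controller by hand with SPDK's rpc.py is shown below; the bdev name Nvme0 is an illustrative assumption, and the -x multipath option may vary between SPDK releases.
# Attach the same subsystem over two TCP paths so bdev_nvme can fail over
# between them when one qpair is torn down (assumes multipath support in this build).
scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t tcp -f ipv4 \
    -a 10.0.0.2 -s 4421 -n nqn.2016-06.io.spdk:cnode1 -x multipath
scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t tcp -f ipv4 \
    -a 10.0.0.2 -s 4422 -n nqn.2016-06.io.spdk:cnode1 -x multipath
# List the registered controllers and their paths
scripts/rpc.py bdev_nvme_get_controllers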
00:27:09.480 [2024-07-15 10:08:16.380506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:96312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.480 [2024-07-15 10:08:16.380551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.480 [2024-07-15 10:08:16.380570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:96320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.480 [2024-07-15 10:08:16.380580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.481 [2024-07-15 10:08:16.380592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:96328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.481 [2024-07-15 10:08:16.380602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.481 [2024-07-15 10:08:16.380613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:96336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.481 [2024-07-15 10:08:16.380622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.481 [2024-07-15 10:08:16.380633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:96344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.481 [2024-07-15 10:08:16.380665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.481 [2024-07-15 10:08:16.380685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:96352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.481 [2024-07-15 10:08:16.380695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.481 [2024-07-15 10:08:16.380706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:96360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.481 [2024-07-15 10:08:16.380715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.481 [2024-07-15 10:08:16.380726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:96368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.481 [2024-07-15 10:08:16.380735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.481 [2024-07-15 10:08:16.380746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:96376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.481 [2024-07-15 10:08:16.380755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.481 [2024-07-15 10:08:16.380765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:96384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.481 [2024-07-15 10:08:16.380774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.481 [2024-07-15 10:08:16.380785] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:96392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.481 [2024-07-15 10:08:16.380794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.481 [2024-07-15 10:08:16.380805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:96400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.481 [2024-07-15 10:08:16.380814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.481 [2024-07-15 10:08:16.380824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:96408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.481 [2024-07-15 10:08:16.380833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.481 [2024-07-15 10:08:16.380844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:96416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.481 [2024-07-15 10:08:16.380853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.481 [2024-07-15 10:08:16.380864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:96424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.481 [2024-07-15 10:08:16.380873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.481 [2024-07-15 10:08:16.380884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:96432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.481 [2024-07-15 10:08:16.380893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.481 [2024-07-15 10:08:16.380903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:96440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.481 [2024-07-15 10:08:16.380914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.481 [2024-07-15 10:08:16.380931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:96448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.481 [2024-07-15 10:08:16.380941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.481 [2024-07-15 10:08:16.380952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:96456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.481 [2024-07-15 10:08:16.380962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.481 [2024-07-15 10:08:16.380972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:96464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.481 [2024-07-15 10:08:16.380982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.481 [2024-07-15 10:08:16.380993] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:96472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.481 [2024-07-15 10:08:16.381002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.481 [2024-07-15 10:08:16.381013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:96480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.481 [2024-07-15 10:08:16.381022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.481 [2024-07-15 10:08:16.381033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:96488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.481 [2024-07-15 10:08:16.381042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.481 [2024-07-15 10:08:16.381053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:96496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.481 [2024-07-15 10:08:16.381062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.481 [2024-07-15 10:08:16.381073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:96504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.481 [2024-07-15 10:08:16.381082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.481 [2024-07-15 10:08:16.381092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:96512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.481 [2024-07-15 10:08:16.381101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.481 [2024-07-15 10:08:16.381112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:96520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.481 [2024-07-15 10:08:16.381121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.481 [2024-07-15 10:08:16.381132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:96528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.481 [2024-07-15 10:08:16.381141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.481 [2024-07-15 10:08:16.381152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:96536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.481 [2024-07-15 10:08:16.381161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.481 [2024-07-15 10:08:16.381172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:96544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.481 [2024-07-15 10:08:16.381185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.481 [2024-07-15 10:08:16.381197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:96552 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.481 [2024-07-15 10:08:16.381206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.481 [2024-07-15 10:08:16.381217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:96560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.481 [2024-07-15 10:08:16.381227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.481 [2024-07-15 10:08:16.381237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:96568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.481 [2024-07-15 10:08:16.381250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.481 [2024-07-15 10:08:16.381261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:96576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.481 [2024-07-15 10:08:16.381270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.481 [2024-07-15 10:08:16.381281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:96584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.481 [2024-07-15 10:08:16.381290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.481 [2024-07-15 10:08:16.381301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:96592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.481 [2024-07-15 10:08:16.381310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.481 [2024-07-15 10:08:16.381321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:96600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.481 [2024-07-15 10:08:16.381331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.481 [2024-07-15 10:08:16.381342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:96608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.481 [2024-07-15 10:08:16.381352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.481 [2024-07-15 10:08:16.381362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:96616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.481 [2024-07-15 10:08:16.381371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.481 [2024-07-15 10:08:16.381383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:96624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.481 [2024-07-15 10:08:16.381392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.481 [2024-07-15 10:08:16.381402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:96632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:27:09.481 [2024-07-15 10:08:16.381411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.481 [2024-07-15 10:08:16.381422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:96640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.481 [2024-07-15 10:08:16.381431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.481 [2024-07-15 10:08:16.381442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:96648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.481 [2024-07-15 10:08:16.381459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.481 [2024-07-15 10:08:16.381470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:96656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.481 [2024-07-15 10:08:16.381479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.481 [2024-07-15 10:08:16.381489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:96664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.482 [2024-07-15 10:08:16.381499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.482 [2024-07-15 10:08:16.381510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:96672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.482 [2024-07-15 10:08:16.381519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.482 [2024-07-15 10:08:16.381530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:96680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.482 [2024-07-15 10:08:16.381539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.482 [2024-07-15 10:08:16.381549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:96688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.482 [2024-07-15 10:08:16.381558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.482 [2024-07-15 10:08:16.381569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:96696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.482 [2024-07-15 10:08:16.381580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.482 [2024-07-15 10:08:16.381603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:96704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.482 [2024-07-15 10:08:16.381611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.482 [2024-07-15 10:08:16.381621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:96712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.482 [2024-07-15 10:08:16.381629] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.482
(From [2024-07-15 10:08:16.381638] to [2024-07-15 10:08:16.406210] the same three notices repeat for every WRITE still queued on qid:1: nvme_qpair.c: 243:nvme_io_qpair_print_command prints the command (sqid:1 nsid:1 len:8, lba 96720 through 96792 as SGL DATA BLOCK OFFSET 0x0 len:0x1000 entries on cids 80, 63, 9, 69, 94, 95, 16, 55, 74 and 71, then lba 96800 through 97328 as PRP1 0x0 PRP2 0x0 entries on cid:0, all in steps of 8 blocks), nvme_qpair.c: 474:spdk_nvme_print_completion reports each of them as ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0, and nvme_qpair.c: 579/558 interleave "*ERROR*: aborting queued i/o" with "*NOTICE*: Command completed manually".)
00:27:09.485 [2024-07-15 10:08:16.406283] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xf50b80 was disconnected and freed. reset controller. 00:27:09.485 [2024-07-15 10:08:16.406303] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:27:09.485 [2024-07-15 10:08:16.406387] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:09.485 [2024-07-15 10:08:16.406407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 (the same request/completion pair follows for cid:1, cid:2 and cid:3) 00:27:09.485 [2024-07-15 10:08:16.406519] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:09.485 [2024-07-15 10:08:16.406585] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2e30 (9): Bad file descriptor 00:27:09.485 [2024-07-15 10:08:16.411820] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:09.485 [2024-07-15 10:08:16.442236] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
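The failover notice just above (from 10.0.0.2:4422 back to 10.0.0.2:4420) is possible because the test attaches the same bdev name once per listener of the subsystem, so bdev_nvme has alternate trids to fall back on when the active path is torn down. As a rough sketch of that setup, using the addresses, ports and NQN that appear elsewhere in this run, with paths abbreviated relative to the SPDK checkout (the script actually driving this is test/nvmf/host/failover.sh):
  # extra listeners on the target side
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
  # attach the same -b name once per path on the bdevperf side; in this SPDK
  # revision the repeated attach registers the additional trids used for failover
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1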
00:27:09.485 00:27:09.485 Latency(us) 00:27:09.485 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:09.485 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:27:09.485 Verification LBA range: start 0x0 length 0x4000 00:27:09.485 NVMe0n1 : 15.01 11839.69 46.25 263.66 0.00 10553.99 436.43 32968.33 00:27:09.485 =================================================================================================================== 00:27:09.485 Total : 11839.69 46.25 263.66 0.00 10553.99 436.43 32968.33 00:27:09.485 Received shutdown signal, test time was about 15.000000 seconds 00:27:09.485 00:27:09.485 Latency(us) 00:27:09.485 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:09.485 =================================================================================================================== 00:27:09.485 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:09.485 10:08:22 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:27:09.485 10:08:22 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # count=3 00:27:09.485 10:08:22 nvmf_tcp.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:27:09.485 10:08:22 nvmf_tcp.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=88153 00:27:09.485 10:08:22 nvmf_tcp.nvmf_failover -- host/failover.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:27:09.485 10:08:22 nvmf_tcp.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 88153 /var/tmp/bdevperf.sock 00:27:09.485 10:08:22 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 88153 ']' 00:27:09.485 10:08:22 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:09.485 10:08:22 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:09.485 10:08:22 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:09.485 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
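The second bdevperf instance above is started with -z, so it sits idle until it has been configured over its JSON-RPC socket; waitforlisten simply waits for that socket to answer before the script continues. A minimal sketch of the pattern this trace follows, with paths relative to the SPDK checkout (the real waitforlisten helper in autotest_common.sh is more careful than the loop shown here):
  build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f &
  bdevperf_pid=$!
  # wait until the UNIX-domain RPC socket responds
  until scripts/rpc.py -s /var/tmp/bdevperf.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done
  # configure listeners and NVMe-oF paths (see the attach sketch above), then
  # kick off the configured verify workload
  examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests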
00:27:09.485 10:08:22 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:09.485 10:08:22 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:27:10.052 10:08:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:10.052 10:08:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:27:10.052 10:08:23 nvmf_tcp.nvmf_failover -- host/failover.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:27:10.311 [2024-07-15 10:08:23.684888] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:27:10.311 10:08:23 nvmf_tcp.nvmf_failover -- host/failover.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:27:10.311 [2024-07-15 10:08:23.864668] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:27:10.311 10:08:23 nvmf_tcp.nvmf_failover -- host/failover.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:27:10.569 NVMe0n1 00:27:10.569 10:08:24 nvmf_tcp.nvmf_failover -- host/failover.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:27:10.828 00:27:10.828 10:08:24 nvmf_tcp.nvmf_failover -- host/failover.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:27:11.087 00:27:11.087 10:08:24 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:27:11.087 10:08:24 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:27:11.347 10:08:24 nvmf_tcp.nvmf_failover -- host/failover.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:27:11.605 10:08:25 nvmf_tcp.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:27:14.915 10:08:28 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:27:14.915 10:08:28 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:27:14.915 10:08:28 nvmf_tcp.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=88291 00:27:14.915 10:08:28 nvmf_tcp.nvmf_failover -- host/failover.sh@89 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:27:14.915 10:08:28 nvmf_tcp.nvmf_failover -- host/failover.sh@92 -- # wait 88291 00:27:15.853 0 00:27:15.853 10:08:29 nvmf_tcp.nvmf_failover -- host/failover.sh@94 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:27:15.853 [2024-07-15 10:08:22.681282] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:27:15.853 [2024-07-15 10:08:22.681361] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88153 ] 00:27:15.853 [2024-07-15 10:08:22.804468] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:15.853 [2024-07-15 10:08:22.917499] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:15.853 [2024-07-15 10:08:25.044708] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:27:15.853 [2024-07-15 10:08:25.044813] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:15.853 [2024-07-15 10:08:25.044830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:15.853 [2024-07-15 10:08:25.044843] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:15.853 [2024-07-15 10:08:25.044853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:15.853 [2024-07-15 10:08:25.044863] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:15.853 [2024-07-15 10:08:25.044872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:15.853 [2024-07-15 10:08:25.044882] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:15.853 [2024-07-15 10:08:25.044890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:15.853 [2024-07-15 10:08:25.044899] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:15.853 [2024-07-15 10:08:25.044933] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:15.853 [2024-07-15 10:08:25.044952] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2234e30 (9): Bad file descriptor 00:27:15.853 [2024-07-15 10:08:25.053459] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:27:15.853 Running I/O for 1 seconds... 
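The failover from 10.0.0.2:4420 to 10.0.0.2:4421 recorded in try.txt above is provoked on purpose: with all three paths attached, the script detaches the active one while the verify workload runs, and bdev_nvme is expected to resume I/O on the next trid. The trigger is the detach already visible in the trace, and the outcome is checked afterwards by counting reset messages; roughly (paths relative to the SPDK checkout):
  # drop the active path; queued I/O on it is aborted and re-driven on 10.0.0.2:4421
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  # the transcript is then checked for the expected number of successful resets
  grep -c 'Resetting controller successful' test/nvmf/host/try.txt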
00:27:15.853 00:27:15.853 Latency(us) 00:27:15.853 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:15.853 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:27:15.854 Verification LBA range: start 0x0 length 0x4000 00:27:15.854 NVMe0n1 : 1.01 11825.81 46.19 0.00 0.00 10773.90 1495.31 12992.73 00:27:15.854 =================================================================================================================== 00:27:15.854 Total : 11825.81 46.19 0.00 0.00 10773.90 1495.31 12992.73 00:27:15.854 10:08:29 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:27:15.854 10:08:29 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:27:16.114 10:08:29 nvmf_tcp.nvmf_failover -- host/failover.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:27:16.373 10:08:29 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:27:16.373 10:08:29 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:27:16.632 10:08:30 nvmf_tcp.nvmf_failover -- host/failover.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:27:16.892 10:08:30 nvmf_tcp.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:27:20.194 10:08:33 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:27:20.194 10:08:33 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:27:20.194 10:08:33 nvmf_tcp.nvmf_failover -- host/failover.sh@108 -- # killprocess 88153 00:27:20.194 10:08:33 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 88153 ']' 00:27:20.194 10:08:33 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 88153 00:27:20.194 10:08:33 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:27:20.194 10:08:33 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:20.194 10:08:33 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 88153 00:27:20.194 10:08:33 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:27:20.194 10:08:33 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:27:20.194 10:08:33 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 88153' 00:27:20.194 killing process with pid 88153 00:27:20.194 10:08:33 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 88153 00:27:20.194 10:08:33 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 88153 00:27:20.194 10:08:33 nvmf_tcp.nvmf_failover -- host/failover.sh@110 -- # sync 00:27:20.194 10:08:33 nvmf_tcp.nvmf_failover -- host/failover.sh@111 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:20.452 10:08:33 nvmf_tcp.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:27:20.452 10:08:33 nvmf_tcp.nvmf_failover -- host/failover.sh@115 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:27:20.452 10:08:33 
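killprocess in the cleanup above is the shared helper from autotest_common.sh; its xtrace (kill -0, uname, ps --no-headers -o comm=, kill, wait) corresponds to roughly the following shape. This is a sketch reconstructed from the trace, not the helper's exact source:
  killprocess() {
      local pid=$1
      [[ -n $pid ]] || return 1
      kill -0 "$pid"                              # bail out if the process is already gone
      if [[ $(uname) == Linux ]]; then
          local process_name
          process_name=$(ps --no-headers -o comm= "$pid")
          # the '[ reactor_0 = sudo ]' check seen above; the real helper
          # special-cases processes that were launched through sudo
          [[ $process_name != sudo ]] || return 1
      fi
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid" || true
  }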
nvmf_tcp.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:27:20.452 10:08:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:20.452 10:08:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@117 -- # sync 00:27:20.452 10:08:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:20.452 10:08:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@120 -- # set +e 00:27:20.452 10:08:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:20.452 10:08:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:20.452 rmmod nvme_tcp 00:27:20.452 rmmod nvme_fabrics 00:27:20.452 rmmod nvme_keyring 00:27:20.452 10:08:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:20.452 10:08:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@124 -- # set -e 00:27:20.452 10:08:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@125 -- # return 0 00:27:20.452 10:08:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 87796 ']' 00:27:20.452 10:08:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 87796 00:27:20.452 10:08:33 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 87796 ']' 00:27:20.452 10:08:33 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 87796 00:27:20.452 10:08:33 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:27:20.452 10:08:33 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:20.452 10:08:33 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 87796 00:27:20.452 10:08:34 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:27:20.452 10:08:34 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:27:20.452 killing process with pid 87796 00:27:20.452 10:08:34 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 87796' 00:27:20.452 10:08:34 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 87796 00:27:20.452 10:08:34 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 87796 00:27:20.709 10:08:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:20.709 10:08:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:20.709 10:08:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:20.709 10:08:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:20.709 10:08:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:20.709 10:08:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:20.709 10:08:34 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:20.709 10:08:34 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:20.709 10:08:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:27:20.709 ************************************ 00:27:20.709 END TEST nvmf_failover 00:27:20.709 ************************************ 00:27:20.709 00:27:20.709 real 0m31.198s 00:27:20.709 user 2m1.117s 00:27:20.709 sys 0m3.769s 00:27:20.709 10:08:34 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:20.709 10:08:34 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:27:20.968 10:08:34 nvmf_tcp -- 
common/autotest_common.sh@1142 -- # return 0 00:27:20.968 10:08:34 nvmf_tcp -- nvmf/nvmf.sh@101 -- # run_test nvmf_host_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:27:20.968 10:08:34 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:27:20.968 10:08:34 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:20.968 10:08:34 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:20.968 ************************************ 00:27:20.968 START TEST nvmf_host_discovery 00:27:20.968 ************************************ 00:27:20.968 10:08:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:27:20.968 * Looking for test storage... 00:27:20.968 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:27:20.968 10:08:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:27:20.968 10:08:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:27:20.968 10:08:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:20.968 10:08:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:20.968 10:08:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:20.968 10:08:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:20.968 10:08:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:20.968 10:08:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:20.968 10:08:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:20.968 10:08:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:20.968 10:08:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:20.968 10:08:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:20.968 10:08:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec 00:27:20.968 10:08:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=a2b6b25a-cc90-4aea-9f09-c06f8a634aec 00:27:20.968 10:08:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:20.968 10:08:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:20.968 10:08:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:27:20.968 10:08:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:20.968 10:08:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:27:20.968 10:08:34 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:20.968 10:08:34 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:20.968 10:08:34 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:20.968 10:08:34 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:20.968 10:08:34 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:20.968 10:08:34 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:20.968 10:08:34 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:27:20.968 10:08:34 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:20.968 10:08:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0 00:27:20.968 10:08:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:20.968 10:08:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:20.968 10:08:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:20.968 10:08:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:20.968 10:08:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:20.968 10:08:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:20.968 10:08:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:20.968 10:08:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:20.968 10:08:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:27:20.968 10:08:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:27:20.968 10:08:34 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:27:20.968 10:08:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:27:20.968 10:08:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:27:20.968 10:08:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:27:20.969 10:08:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:27:20.969 10:08:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:20.969 10:08:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:20.969 10:08:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:20.969 10:08:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:20.969 10:08:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:20.969 10:08:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:20.969 10:08:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:20.969 10:08:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:20.969 10:08:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:27:20.969 10:08:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:27:20.969 10:08:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:27:20.969 10:08:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:27:20.969 10:08:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:27:20.969 10:08:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@432 -- # nvmf_veth_init 00:27:20.969 10:08:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:20.969 10:08:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:20.969 10:08:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:27:20.969 10:08:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:27:20.969 10:08:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:27:20.969 10:08:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:27:20.969 10:08:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:27:20.969 10:08:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:20.969 10:08:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:27:20.969 10:08:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:27:20.969 10:08:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:27:20.969 10:08:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:27:20.969 10:08:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:27:20.969 10:08:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:27:21.228 Cannot find device "nvmf_tgt_br" 00:27:21.228 
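host/discovery.sh fixes the test parameters here: the discovery service lives on port 8009, subsystems are named under nqn.2016-06.io.spdk:cnode, the host identifies itself as nqn.2021-12.io.spdk:test, and the host-side app will listen for RPCs on /tmp/host.sock. nvmftestinit then runs nvmf_veth_init, which first tears down any leftovers from a previous run (the "Cannot find device" / "Cannot open network namespace" messages around here are that cleanup failing harmlessly on a fresh VM, each followed by true) and then builds an isolated veth topology: nvmf_init_if with 10.0.0.1 stays in the default namespace, nvmf_tgt_if (10.0.0.2) and nvmf_tgt_if2 (10.0.0.3) move into the nvmf_tgt_ns_spdk namespace, and the peer ends are all enslaved to the nvmf_br bridge. A condensed sketch of what the trace below performs, using the names and addresses defined above:

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  ip link add nvmf_br type bridge
  for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2 nvmf_br; do ip link set "$dev" up; done
  ip netns exec nvmf_tgt_ns_spdk sh -c 'ip link set nvmf_tgt_if up; ip link set nvmf_tgt_if2 up; ip link set lo up'
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2   # initiator side -> first target address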
10:08:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@155 -- # true 00:27:21.228 10:08:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:27:21.228 Cannot find device "nvmf_tgt_br2" 00:27:21.228 10:08:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@156 -- # true 00:27:21.228 10:08:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:27:21.228 10:08:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:27:21.228 Cannot find device "nvmf_tgt_br" 00:27:21.228 10:08:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@158 -- # true 00:27:21.228 10:08:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:27:21.228 Cannot find device "nvmf_tgt_br2" 00:27:21.228 10:08:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@159 -- # true 00:27:21.228 10:08:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:27:21.228 10:08:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:27:21.228 10:08:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:27:21.228 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:27:21.228 10:08:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@162 -- # true 00:27:21.228 10:08:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:27:21.228 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:27:21.228 10:08:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@163 -- # true 00:27:21.228 10:08:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:27:21.228 10:08:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:27:21.228 10:08:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:27:21.228 10:08:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:27:21.228 10:08:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:27:21.228 10:08:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:27:21.228 10:08:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:27:21.228 10:08:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:27:21.228 10:08:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:27:21.228 10:08:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:27:21.228 10:08:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:27:21.228 10:08:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:27:21.228 10:08:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:27:21.228 10:08:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:27:21.228 10:08:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@188 -- # ip 
netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:27:21.228 10:08:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:27:21.228 10:08:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:27:21.228 10:08:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:27:21.487 10:08:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:27:21.487 10:08:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:27:21.487 10:08:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:27:21.487 10:08:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:27:21.487 10:08:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:27:21.487 10:08:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:27:21.487 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:21.487 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.117 ms 00:27:21.487 00:27:21.487 --- 10.0.0.2 ping statistics --- 00:27:21.487 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:21.487 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms 00:27:21.487 10:08:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:27:21.487 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:27:21.487 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.073 ms 00:27:21.487 00:27:21.487 --- 10.0.0.3 ping statistics --- 00:27:21.487 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:21.487 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:27:21.487 10:08:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:27:21.487 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:21.487 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.074 ms 00:27:21.487 00:27:21.487 --- 10.0.0.1 ping statistics --- 00:27:21.487 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:21.487 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:27:21.487 10:08:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:21.487 10:08:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@433 -- # return 0 00:27:21.487 10:08:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:21.487 10:08:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:21.487 10:08:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:21.487 10:08:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:21.487 10:08:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:21.487 10:08:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:21.487 10:08:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:21.487 10:08:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:27:21.487 10:08:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:21.487 10:08:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:21.487 10:08:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:21.487 10:08:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@481 -- # nvmfpid=88589 00:27:21.487 10:08:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:27:21.487 10:08:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@482 -- # waitforlisten 88589 00:27:21.487 10:08:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 88589 ']' 00:27:21.487 10:08:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:21.487 10:08:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:21.487 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:21.487 10:08:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:21.487 10:08:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:21.487 10:08:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:21.487 [2024-07-15 10:08:34.982246] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:27:21.487 [2024-07-15 10:08:34.982686] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:21.746 [2024-07-15 10:08:35.114370] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:21.746 [2024-07-15 10:08:35.213613] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
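With all three connectivity checks green (10.0.0.2 and 10.0.0.3 from the initiator side, 10.0.0.1 from inside the namespace), nvmfappstart launches the NVMe-oF target inside nvmf_tgt_ns_spdk with -i 0 -e 0xFFFF -m 0x2 (pid 88589 here) and waitforlisten blocks until the target's JSON-RPC socket answers. A minimal stand-in for that step, assuming the default /var/tmp/spdk.sock socket path:

  ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
  nvmfpid=$!   # target pid, via the netns exec wrapper
  # waitforlisten does this more carefully (it also checks that the pid stays alive)
  while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.1; done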
00:27:21.746 [2024-07-15 10:08:35.213673] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:21.746 [2024-07-15 10:08:35.213680] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:21.746 [2024-07-15 10:08:35.213685] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:21.746 [2024-07-15 10:08:35.213689] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:21.746 [2024-07-15 10:08:35.213708] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:22.313 10:08:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:22.313 10:08:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 00:27:22.313 10:08:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:22.313 10:08:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:22.313 10:08:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:22.313 10:08:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:22.313 10:08:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:22.313 10:08:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:22.313 10:08:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:22.313 [2024-07-15 10:08:35.877253] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:22.314 10:08:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:22.314 10:08:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:27:22.314 10:08:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:22.314 10:08:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:22.314 [2024-07-15 10:08:35.889313] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:27:22.314 10:08:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:22.314 10:08:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:27:22.314 10:08:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:22.314 10:08:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:22.573 null0 00:27:22.573 10:08:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:22.573 10:08:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:27:22.573 10:08:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:22.573 10:08:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:22.573 null1 00:27:22.573 10:08:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:22.574 10:08:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:27:22.574 10:08:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # 
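The rpc_cmd calls above (no -s flag, so they go to the target's default /var/tmp/spdk.sock) configure the target side: a TCP transport with the '-o -u 8192' options taken from NVMF_TRANSPORT_OPTS, a discovery listener on 10.0.0.2:8009, and two null bdevs of 1000 MB with 512-byte blocks that will later back the test namespaces. Expressed with scripts/rpc.py instead of the rpc_cmd wrapper (an equivalent form, not the literal commands the suite runs):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py      # talks to /var/tmp/spdk.sock by default
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009
  $rpc bdev_null_create null0 1000 512
  $rpc bdev_null_create null1 1000 512
  $rpc bdev_wait_for_examine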
xtrace_disable 00:27:22.574 10:08:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:22.574 10:08:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:22.574 10:08:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=88639 00:27:22.574 10:08:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:27:22.574 10:08:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 88639 /tmp/host.sock 00:27:22.574 10:08:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 88639 ']' 00:27:22.574 10:08:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:27:22.574 10:08:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:22.574 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:27:22.574 10:08:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:27:22.574 10:08:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:22.574 10:08:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:22.574 [2024-07-15 10:08:35.983441] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:27:22.574 [2024-07-15 10:08:35.983530] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88639 ] 00:27:22.574 [2024-07-15 10:08:36.119937] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:22.833 [2024-07-15 10:08:36.222604] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:23.402 10:08:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:23.402 10:08:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 00:27:23.402 10:08:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:23.402 10:08:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:27:23.402 10:08:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:23.402 10:08:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:23.402 10:08:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:23.402 10:08:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:27:23.402 10:08:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:23.402 10:08:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:23.402 10:08:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:23.402 10:08:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:27:23.402 10:08:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:27:23.402 10:08:36 
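A second SPDK app is started on the initiator side ('nvmf_tgt -m 0x1 -r /tmp/host.sock', pid 88639); it plays the NVMe-oF host role in this test, so every RPC aimed at it carries -s /tmp/host.sock. After enabling bdev_nvme debug logging, the script points it at the target's discovery service. Roughly, with scripts/rpc.py standing in for the rpc_cmd wrapper:

  host_rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /tmp/host.sock"
  $host_rpc log_set_flag bdev_nvme
  $host_rpc bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test
  # From here on bdev_nvme follows the discovery log page: every subsystem/listener the target
  # exposes appears on the host side as an nvme* controller with matching nvme*n* bdevs.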
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:27:23.402 10:08:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:27:23.402 10:08:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:23.402 10:08:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:27:23.402 10:08:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:23.402 10:08:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:27:23.402 10:08:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:23.402 10:08:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:27:23.402 10:08:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:27:23.402 10:08:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:23.402 10:08:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:27:23.402 10:08:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:27:23.402 10:08:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:23.402 10:08:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:27:23.402 10:08:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:23.402 10:08:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:23.402 10:08:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:27:23.402 10:08:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:27:23.402 10:08:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:23.402 10:08:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:23.402 10:08:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:23.402 10:08:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:27:23.402 10:08:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:27:23.402 10:08:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:27:23.402 10:08:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:27:23.402 10:08:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:23.402 10:08:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:27:23.402 10:08:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:23.662 10:08:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:23.662 10:08:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:27:23.662 10:08:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:27:23.662 10:08:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:23.662 10:08:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:27:23.662 10:08:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:27:23.662 10:08:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:23.662 10:08:37 nvmf_tcp.nvmf_host_discovery -- 
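The assertions from here on are built out of small helpers that poll the host app and flatten the JSON with jq, each wrapped in waitforcondition, which simply re-evaluates the expression up to 10 times with a one-second sleep in between. Reconstructed from the trace, the helpers look roughly like this:

  get_subsystem_names() {   # controller names known to the host app
      rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs
  }
  get_bdev_list() {         # bdevs created on the host side
      rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
  }
  get_notification_count() {   # notify events seen since the last notify_id
      notification_count=$(rpc_cmd -s /tmp/host.sock notify_get_notifications -i "$notify_id" | jq '. | length')
      notify_id=$((notify_id + notification_count))
  }

At this stage both lists are expected to be empty (the '' == '' checks): even after nqn.2016-06.io.spdk:cnode0 is created on the target, nothing attaches on the host side until the subsystem has a namespace and a listener and the host NQN has been allowed via nvmf_subsystem_add_host, which is exactly what the next trace lines do.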
common/autotest_common.sh@10 -- # set +x 00:27:23.662 10:08:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:27:23.662 10:08:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:23.662 10:08:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:27:23.662 10:08:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:27:23.662 10:08:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:23.662 10:08:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:23.662 10:08:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:23.662 10:08:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:27:23.662 10:08:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:27:23.662 10:08:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:27:23.662 10:08:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:27:23.662 10:08:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:23.662 10:08:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:23.662 10:08:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:27:23.662 10:08:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:23.662 10:08:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:27:23.662 10:08:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:27:23.662 10:08:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:23.662 10:08:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:23.662 10:08:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:23.662 10:08:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:27:23.662 10:08:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:27:23.662 10:08:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:27:23.662 10:08:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:23.662 10:08:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:27:23.662 10:08:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:27:23.662 10:08:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:23.662 10:08:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:23.662 [2024-07-15 10:08:37.223025] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:23.662 10:08:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:23.662 10:08:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:27:23.662 10:08:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:27:23.662 10:08:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:23.662 10:08:37 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:27:23.662 10:08:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:27:23.662 10:08:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:27:23.662 10:08:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:27:23.662 10:08:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:23.923 10:08:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:27:23.923 10:08:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:27:23.923 10:08:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:27:23.923 10:08:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:23.923 10:08:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:23.923 10:08:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:23.923 10:08:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:27:23.923 10:08:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:27:23.923 10:08:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:23.923 10:08:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:27:23.923 10:08:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:27:23.923 10:08:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:27:23.923 10:08:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:27:23.923 10:08:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:27:23.923 10:08:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:27:23.923 10:08:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:27:23.923 10:08:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:27:23.923 10:08:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:27:23.923 10:08:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:27:23.923 10:08:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:27:23.923 10:08:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:23.923 10:08:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:23.923 10:08:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:23.923 10:08:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:27:23.923 10:08:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:27:23.923 10:08:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:27:23.923 10:08:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:27:23.923 10:08:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:27:23.923 10:08:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:23.923 10:08:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:23.923 10:08:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:23.923 10:08:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:27:23.923 10:08:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:27:23.923 10:08:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:27:23.923 10:08:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:27:23.923 10:08:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:27:23.923 10:08:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:27:23.923 10:08:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:27:23.923 10:08:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:27:23.923 10:08:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:27:23.923 10:08:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:23.923 10:08:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:27:23.923 10:08:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:23.923 10:08:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:23.923 10:08:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == \n\v\m\e\0 ]] 00:27:23.923 10:08:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1 00:27:24.492 [2024-07-15 10:08:37.855989] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:27:24.492 [2024-07-15 10:08:37.856023] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:27:24.492 [2024-07-15 10:08:37.856036] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:27:24.492 [2024-07-15 10:08:37.941949] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM 
nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:27:24.492 [2024-07-15 10:08:37.998375] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:27:24.492 [2024-07-15 10:08:37.998413] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:27:25.061 10:08:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:27:25.061 10:08:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:27:25.061 10:08:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:27:25.061 10:08:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:27:25.061 10:08:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:27:25.061 10:08:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:25.061 10:08:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:25.061 10:08:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:27:25.061 10:08:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:27:25.061 10:08:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:25.061 10:08:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:25.061 10:08:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:27:25.061 10:08:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:27:25.061 10:08:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:27:25.061 10:08:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:27:25.061 10:08:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:27:25.061 10:08:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:27:25.061 10:08:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:27:25.061 10:08:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:27:25.061 10:08:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:25.061 10:08:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:27:25.061 10:08:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:25.061 10:08:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:25.061 10:08:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:27:25.061 10:08:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:25.061 10:08:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:27:25.061 10:08:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:27:25.062 10:08:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:27:25.062 10:08:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ 
"$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:27:25.062 10:08:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:27:25.062 10:08:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:27:25.062 10:08:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:27:25.062 10:08:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:27:25.062 10:08:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:27:25.062 10:08:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:25.062 10:08:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:25.062 10:08:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:27:25.062 10:08:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:27:25.062 10:08:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:27:25.062 10:08:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:25.062 10:08:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 == \4\4\2\0 ]] 00:27:25.062 10:08:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:27:25.062 10:08:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:27:25.062 10:08:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:27:25.062 10:08:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:27:25.062 10:08:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:27:25.062 10:08:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:27:25.062 10:08:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:27:25.062 10:08:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:27:25.062 10:08:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:27:25.062 10:08:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:27:25.062 10:08:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:27:25.062 10:08:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:25.062 10:08:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:25.062 10:08:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:25.321 10:08:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:27:25.321 10:08:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:27:25.321 10:08:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:27:25.321 10:08:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:27:25.321 10:08:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:27:25.321 10:08:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:25.321 10:08:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:25.321 10:08:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:25.321 10:08:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:27:25.321 10:08:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:27:25.321 10:08:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:27:25.321 10:08:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:27:25.321 10:08:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:27:25.321 10:08:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:27:25.321 10:08:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:27:25.321 10:08:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:25.321 10:08:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:25.321 10:08:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:25.321 10:08:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:27:25.321 10:08:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:27:25.321 10:08:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:25.321 10:08:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:27:25.321 10:08:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:27:25.321 10:08:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:27:25.321 10:08:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:27:25.321 10:08:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:27:25.321 10:08:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:27:25.321 10:08:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:27:25.321 10:08:38 nvmf_tcp.nvmf_host_discovery 
-- common/autotest_common.sh@914 -- # (( max-- )) 00:27:25.321 10:08:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:27:25.321 10:08:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:27:25.321 10:08:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:27:25.321 10:08:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:25.321 10:08:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:25.321 10:08:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:27:25.321 10:08:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:25.321 10:08:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:27:25.321 10:08:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:27:25.321 10:08:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:27:25.321 10:08:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:27:25.321 10:08:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:27:25.321 10:08:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:25.321 10:08:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:25.321 [2024-07-15 10:08:38.788622] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:27:25.321 [2024-07-15 10:08:38.789789] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:27:25.321 [2024-07-15 10:08:38.789821] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:27:25.321 10:08:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:25.321 10:08:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:27:25.321 10:08:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:27:25.321 10:08:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:27:25.321 10:08:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:27:25.321 10:08:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:27:25.321 10:08:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:27:25.321 10:08:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:27:25.321 10:08:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:25.321 10:08:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:25.321 10:08:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:27:25.321 10:08:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:27:25.321 10:08:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:27:25.321 10:08:38 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:25.321 10:08:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:25.321 10:08:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:27:25.321 10:08:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:27:25.321 10:08:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:27:25.321 10:08:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:27:25.321 10:08:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:27:25.321 10:08:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:27:25.321 10:08:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:27:25.321 10:08:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:25.321 10:08:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:27:25.321 10:08:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:27:25.321 10:08:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:27:25.321 10:08:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:25.321 10:08:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:25.321 [2024-07-15 10:08:38.877051] bdev_nvme.c:6907:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:27:25.321 10:08:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:25.581 10:08:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:27:25.581 10:08:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:27:25.581 10:08:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:27:25.581 10:08:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:27:25.581 10:08:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:27:25.581 10:08:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:27:25.581 10:08:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:27:25.581 10:08:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:27:25.581 10:08:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:27:25.581 10:08:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:27:25.581 10:08:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:25.581 10:08:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:25.581 10:08:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:27:25.581 10:08:38 nvmf_tcp.nvmf_host_discovery 
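Adding the 4421 listener triggers an asynchronous event on the persistent discovery connection; bdev_nvme re-reads the discovery log page and attaches a second path to the existing nvme0 controller ('4421 new path for nvme0' above), so get_subsystem_paths nvme0 is now expected to report '4420 4421'. If nvme-cli happens to be installed in the VM (nothing in this test depends on it), the same log page can be inspected by hand:

  nvme discover -t tcp -a 10.0.0.2 -s 8009 -q nqn.2021-12.io.spdk:test
  # expect two records for nqn.2016-06.io.spdk:cnode0, one with trsvcid 4420 and one with 4421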
-- host/discovery.sh@63 -- # xargs 00:27:25.581 10:08:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:25.581 [2024-07-15 10:08:38.937175] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:27:25.581 [2024-07-15 10:08:38.937201] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:27:25.581 [2024-07-15 10:08:38.937206] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:27:25.581 10:08:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:27:25.581 10:08:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1 00:27:26.588 10:08:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:27:26.588 10:08:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:27:26.588 10:08:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:27:26.588 10:08:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:27:26.588 10:08:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:27:26.588 10:08:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:26.588 10:08:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:26.588 10:08:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:27:26.588 10:08:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:27:26.588 10:08:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:26.588 10:08:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:27:26.588 10:08:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:27:26.588 10:08:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:27:26.588 10:08:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:27:26.588 10:08:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:27:26.588 10:08:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:27:26.588 10:08:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:27:26.588 10:08:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:27:26.588 10:08:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:27:26.588 10:08:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:27:26.588 10:08:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:27:26.588 10:08:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:27:26.588 10:08:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:26.588 10:08:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:26.588 10:08:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:26.588 10:08:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:27:26.588 10:08:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:27:26.588 10:08:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:27:26.588 10:08:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:27:26.588 10:08:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:27:26.588 10:08:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:26.588 10:08:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:26.588 [2024-07-15 10:08:40.074739] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:27:26.588 [2024-07-15 10:08:40.074772] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:27:26.588 10:08:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:26.588 10:08:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:27:26.588 10:08:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:27:26.588 10:08:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:27:26.588 10:08:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:27:26.588 10:08:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:27:26.588 [2024-07-15 10:08:40.082251] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:26.588 [2024-07-15 10:08:40.082282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.588 [2024-07-15 10:08:40.082292] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:26.588 [2024-07-15 10:08:40.082298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.588 [2024-07-15 10:08:40.082305] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:26.588 [2024-07-15 10:08:40.082312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.588 [2024-07-15 10:08:40.082319] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:26.588 [2024-07-15 10:08:40.082325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.588 [2024-07-15 10:08:40.082331] nvme_tcp.c: 
327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x91bc50 is same with the state(5) to be set 00:27:26.588 10:08:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:27:26.588 10:08:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:27:26.588 10:08:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:27:26.588 10:08:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:27:26.588 10:08:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:26.588 10:08:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:27:26.588 10:08:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:26.588 [2024-07-15 10:08:40.092196] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x91bc50 (9): Bad file descriptor 00:27:26.588 [2024-07-15 10:08:40.102194] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:27:26.588 [2024-07-15 10:08:40.102314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.588 [2024-07-15 10:08:40.102330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91bc50 with addr=10.0.0.2, port=4420 00:27:26.588 [2024-07-15 10:08:40.102338] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x91bc50 is same with the state(5) to be set 00:27:26.588 [2024-07-15 10:08:40.102351] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x91bc50 (9): Bad file descriptor 00:27:26.588 [2024-07-15 10:08:40.102362] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:27:26.588 [2024-07-15 10:08:40.102370] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:27:26.588 [2024-07-15 10:08:40.102377] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:27:26.588 [2024-07-15 10:08:40.102388] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
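The ERROR lines above are the expected fallout of the nvmf_subsystem_remove_listener call for port 4420: the target drops that listener, the queue pair on the removed path dies (its outstanding ASYNC EVENT REQUESTs complete with ABORTED - SQ DELETION, then the socket reports 'Bad file descriptor'), and bdev_nvme keeps trying to reset/reconnect the stale 4420 path, failing each time with connect() errno 111 (ECONNREFUSED). None of this is a test failure; it continues until the refreshed discovery log page no longer lists 4420 and that path is dropped, leaving only 4421. That state can be checked with the same pipeline the get_subsystem_paths helper uses:

  rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
  # expected once the stale path is gone: 4421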
00:27:26.588 10:08:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:26.588 [2024-07-15 10:08:40.112230] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:27:26.588 [2024-07-15 10:08:40.112295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.588 [2024-07-15 10:08:40.112307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91bc50 with addr=10.0.0.2, port=4420 00:27:26.588 [2024-07-15 10:08:40.112314] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x91bc50 is same with the state(5) to be set 00:27:26.588 [2024-07-15 10:08:40.112324] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x91bc50 (9): Bad file descriptor 00:27:26.588 [2024-07-15 10:08:40.112333] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:27:26.588 [2024-07-15 10:08:40.112338] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:27:26.588 [2024-07-15 10:08:40.112344] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:27:26.588 [2024-07-15 10:08:40.112360] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:26.588 [2024-07-15 10:08:40.122249] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:27:26.588 [2024-07-15 10:08:40.122315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.588 [2024-07-15 10:08:40.122326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91bc50 with addr=10.0.0.2, port=4420 00:27:26.588 [2024-07-15 10:08:40.122332] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x91bc50 is same with the state(5) to be set 00:27:26.588 [2024-07-15 10:08:40.122342] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x91bc50 (9): Bad file descriptor 00:27:26.588 [2024-07-15 10:08:40.122350] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:27:26.588 [2024-07-15 10:08:40.122355] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:27:26.588 [2024-07-15 10:08:40.122360] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:27:26.588 [2024-07-15 10:08:40.122369] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
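The repeated "connect() failed, errno = 111" / "Bad file descriptor" records are the host-side bdev_nvme module retrying 10.0.0.2:4420 after the host/discovery.sh@127 step above removed that listener on the target. Illustration only, using the rpc.py path that appears later in this log (the test itself issues the same call through its rpc_cmd wrapper against the default target socket):

    # Removing the first listener is what turns every reconnect attempt above into
    # ECONNREFUSED (errno 111) until the host settles on the 4421 path advertised
    # by the discovery service at 10.0.0.2:8009.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener \
        nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420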
00:27:26.588 [2024-07-15 10:08:40.132273] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:27:26.588 [2024-07-15 10:08:40.132325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.589 [2024-07-15 10:08:40.132335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91bc50 with addr=10.0.0.2, port=4420 00:27:26.589 [2024-07-15 10:08:40.132341] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x91bc50 is same with the state(5) to be set 00:27:26.589 [2024-07-15 10:08:40.132350] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x91bc50 (9): Bad file descriptor 00:27:26.589 [2024-07-15 10:08:40.132363] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:27:26.589 [2024-07-15 10:08:40.132368] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:27:26.589 [2024-07-15 10:08:40.132373] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:27:26.589 [2024-07-15 10:08:40.132399] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:26.589 10:08:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:26.589 10:08:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:27:26.589 10:08:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:27:26.589 10:08:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:27:26.589 10:08:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:27:26.589 10:08:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:27:26.589 10:08:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:27:26.589 10:08:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:27:26.589 10:08:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:26.589 10:08:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:27:26.589 10:08:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:26.589 [2024-07-15 10:08:40.142289] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:27:26.589 [2024-07-15 10:08:40.142343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.589 [2024-07-15 10:08:40.142354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91bc50 with addr=10.0.0.2, port=4420 00:27:26.589 [2024-07-15 10:08:40.142361] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x91bc50 is same with the state(5) to be set 00:27:26.589 [2024-07-15 10:08:40.142371] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x91bc50 (9): Bad file descriptor 00:27:26.589 [2024-07-15 10:08:40.142380] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:27:26.589 [2024-07-15 
10:08:40.142386] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:27:26.589 [2024-07-15 10:08:40.142392] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:27:26.589 [2024-07-15 10:08:40.142401] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:26.589 10:08:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:26.589 10:08:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:27:26.589 10:08:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:27:26.849 [2024-07-15 10:08:40.152308] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:27:26.849 [2024-07-15 10:08:40.152380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.849 [2024-07-15 10:08:40.152393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91bc50 with addr=10.0.0.2, port=4420 00:27:26.849 [2024-07-15 10:08:40.152399] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x91bc50 is same with the state(5) to be set 00:27:26.849 [2024-07-15 10:08:40.152409] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x91bc50 (9): Bad file descriptor 00:27:26.849 [2024-07-15 10:08:40.152418] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:27:26.849 [2024-07-15 10:08:40.152423] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:27:26.849 [2024-07-15 10:08:40.152428] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:27:26.849 [2024-07-15 10:08:40.152437] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
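Interleaved with the reconnect noise, the autotest_common.sh@912-@916 records trace the generic polling helper the test leans on. A minimal sketch consistent with that trace (the retry pacing and the timeout return value are assumptions; only the cond/max/eval structure is visible here):

    waitforcondition() {
        local cond=$1      # e.g. '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
        local max=10
        while (( max-- )); do
            eval "$cond" && return 0
            sleep 0.1      # assumed pacing; not shown in the xtrace
        done
        return 1           # assumed timeout behaviour
    }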
00:27:26.849 [2024-07-15 10:08:40.161315] bdev_nvme.c:6770:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:27:26.849 [2024-07-15 10:08:40.161337] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:27:26.849 10:08:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:26.849 10:08:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:27:26.849 10:08:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:27:26.849 10:08:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:27:26.849 10:08:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:27:26.849 10:08:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:27:26.849 10:08:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:27:26.849 10:08:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:27:26.849 10:08:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:27:26.849 10:08:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:27:26.849 10:08:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:26.849 10:08:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:26.849 10:08:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:27:26.849 10:08:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:27:26.849 10:08:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:27:26.849 10:08:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:26.849 10:08:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4421 == \4\4\2\1 ]] 00:27:26.849 10:08:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:27:26.849 10:08:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:27:26.849 10:08:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:27:26.849 10:08:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:27:26.849 10:08:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:27:26.849 10:08:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:27:26.849 10:08:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:27:26.849 10:08:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:27:26.849 10:08:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:27:26.849 10:08:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd 
-s /tmp/host.sock notify_get_notifications -i 2 00:27:26.849 10:08:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:27:26.849 10:08:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:26.849 10:08:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:26.849 10:08:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:26.849 10:08:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:27:26.849 10:08:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:27:26.849 10:08:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:27:26.849 10:08:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:27:26.849 10:08:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:27:26.849 10:08:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:26.849 10:08:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:26.849 10:08:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:26.849 10:08:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:27:26.849 10:08:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:27:26.849 10:08:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:27:26.849 10:08:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:27:26.849 10:08:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:27:26.849 10:08:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:27:26.849 10:08:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:27:26.849 10:08:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:26.849 10:08:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:26.849 10:08:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:27:26.849 10:08:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:27:26.849 10:08:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:27:26.849 10:08:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:26.849 10:08:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]] 00:27:26.849 10:08:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:27:26.849 10:08:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:27:26.849 10:08:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:27:26.849 10:08:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:27:26.849 10:08:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:27:26.849 10:08:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:27:26.849 
10:08:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:27:26.849 10:08:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:27:26.849 10:08:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:26.849 10:08:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:27:26.849 10:08:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:27:26.849 10:08:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:26.849 10:08:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:26.849 10:08:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:26.849 10:08:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]] 00:27:26.849 10:08:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:27:26.849 10:08:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:27:26.849 10:08:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:27:27.108 10:08:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:27:27.108 10:08:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:27:27.108 10:08:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:27:27.108 10:08:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:27:27.108 10:08:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:27:27.108 10:08:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:27:27.108 10:08:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:27:27.108 10:08:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:27.108 10:08:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:27.108 10:08:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:27:27.108 10:08:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:27.108 10:08:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:27:27.108 10:08:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:27:27.108 10:08:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:27:27.108 10:08:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:27:27.108 10:08:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:27:27.108 10:08:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:27.108 10:08:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:28.045 [2024-07-15 10:08:41.499289] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:27:28.045 [2024-07-15 10:08:41.499323] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:27:28.045 [2024-07-15 10:08:41.499336] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:27:28.045 [2024-07-15 10:08:41.585236] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:27:28.304 [2024-07-15 10:08:41.645128] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:27:28.304 [2024-07-15 10:08:41.645182] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:27:28.304 10:08:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:28.304 10:08:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:27:28.304 10:08:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:27:28.304 10:08:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:27:28.304 10:08:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:27:28.304 10:08:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:28.304 10:08:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:27:28.304 10:08:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:28.304 10:08:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:27:28.304 10:08:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:28.304 10:08:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:28.304 2024/07/15 10:08:41 error on JSON-RPC call, method: bdev_nvme_start_discovery, params: map[adrfam:ipv4 hostnqn:nqn.2021-12.io.spdk:test name:nvme traddr:10.0.0.2 
trsvcid:8009 trtype:tcp wait_for_attach:%!s(bool=true)], err: error received for bdev_nvme_start_discovery method, err: Code=-17 Msg=File exists 00:27:28.304 request: 00:27:28.304 { 00:27:28.304 "method": "bdev_nvme_start_discovery", 00:27:28.304 "params": { 00:27:28.304 "name": "nvme", 00:27:28.304 "trtype": "tcp", 00:27:28.304 "traddr": "10.0.0.2", 00:27:28.304 "adrfam": "ipv4", 00:27:28.304 "trsvcid": "8009", 00:27:28.304 "hostnqn": "nqn.2021-12.io.spdk:test", 00:27:28.304 "wait_for_attach": true 00:27:28.304 } 00:27:28.304 } 00:27:28.304 Got JSON-RPC error response 00:27:28.304 GoRPCClient: error on JSON-RPC call 00:27:28.304 10:08:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:27:28.304 10:08:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:27:28.304 10:08:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:27:28.304 10:08:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:27:28.304 10:08:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:27:28.304 10:08:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:27:28.304 10:08:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:27:28.304 10:08:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:27:28.304 10:08:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:28.304 10:08:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:28.304 10:08:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:27:28.304 10:08:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:27:28.304 10:08:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:28.304 10:08:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:27:28.304 10:08:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:27:28.304 10:08:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:28.304 10:08:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:27:28.304 10:08:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:27:28.304 10:08:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:28.304 10:08:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:27:28.304 10:08:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:28.304 10:08:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:28.304 10:08:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:27:28.305 10:08:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:27:28.305 10:08:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:27:28.305 10:08:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:27:28.305 10:08:41 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:27:28.305 10:08:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:28.305 10:08:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:27:28.305 10:08:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:28.305 10:08:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:27:28.305 10:08:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:28.305 10:08:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:28.305 2024/07/15 10:08:41 error on JSON-RPC call, method: bdev_nvme_start_discovery, params: map[adrfam:ipv4 hostnqn:nqn.2021-12.io.spdk:test name:nvme_second traddr:10.0.0.2 trsvcid:8009 trtype:tcp wait_for_attach:%!s(bool=true)], err: error received for bdev_nvme_start_discovery method, err: Code=-17 Msg=File exists 00:27:28.305 request: 00:27:28.305 { 00:27:28.305 "method": "bdev_nvme_start_discovery", 00:27:28.305 "params": { 00:27:28.305 "name": "nvme_second", 00:27:28.305 "trtype": "tcp", 00:27:28.305 "traddr": "10.0.0.2", 00:27:28.305 "adrfam": "ipv4", 00:27:28.305 "trsvcid": "8009", 00:27:28.305 "hostnqn": "nqn.2021-12.io.spdk:test", 00:27:28.305 "wait_for_attach": true 00:27:28.305 } 00:27:28.305 } 00:27:28.305 Got JSON-RPC error response 00:27:28.305 GoRPCClient: error on JSON-RPC call 00:27:28.305 10:08:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:27:28.305 10:08:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:27:28.305 10:08:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:27:28.305 10:08:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:27:28.305 10:08:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:27:28.305 10:08:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:27:28.305 10:08:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:27:28.305 10:08:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:28.305 10:08:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:28.305 10:08:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:27:28.305 10:08:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:27:28.305 10:08:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:27:28.305 10:08:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:28.305 10:08:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:27:28.305 10:08:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:27:28.305 10:08:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:28.305 10:08:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:28.305 10:08:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:28.305 10:08:41 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@55 -- # jq -r '.[].name' 00:27:28.305 10:08:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:27:28.305 10:08:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:27:28.563 10:08:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:28.563 10:08:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:27:28.563 10:08:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:27:28.563 10:08:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:27:28.563 10:08:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:27:28.563 10:08:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:27:28.563 10:08:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:28.563 10:08:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:27:28.563 10:08:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:28.563 10:08:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:27:28.563 10:08:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:28.563 10:08:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:29.498 [2024-07-15 10:08:42.915854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.498 [2024-07-15 10:08:42.915915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x918540 with addr=10.0.0.2, port=8010 00:27:29.498 [2024-07-15 10:08:42.915932] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:27:29.498 [2024-07-15 10:08:42.915955] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:27:29.498 [2024-07-15 10:08:42.915962] bdev_nvme.c:7045:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:27:30.435 [2024-07-15 10:08:43.913919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.435 [2024-07-15 10:08:43.913981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x918540 with addr=10.0.0.2, port=8010 00:27:30.435 [2024-07-15 10:08:43.913998] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:27:30.435 [2024-07-15 10:08:43.914004] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:27:30.435 [2024-07-15 10:08:43.914010] bdev_nvme.c:7045:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:27:31.372 [2024-07-15 10:08:44.911878] bdev_nvme.c:7026:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:27:31.372 2024/07/15 10:08:44 error on JSON-RPC call, method: bdev_nvme_start_discovery, params: map[adrfam:ipv4 attach_timeout_ms:3000 hostnqn:nqn.2021-12.io.spdk:test name:nvme_second 
traddr:10.0.0.2 trsvcid:8010 trtype:tcp wait_for_attach:%!s(bool=false)], err: error received for bdev_nvme_start_discovery method, err: Code=-110 Msg=Connection timed out 00:27:31.372 request: 00:27:31.372 { 00:27:31.372 "method": "bdev_nvme_start_discovery", 00:27:31.372 "params": { 00:27:31.372 "name": "nvme_second", 00:27:31.372 "trtype": "tcp", 00:27:31.372 "traddr": "10.0.0.2", 00:27:31.372 "adrfam": "ipv4", 00:27:31.372 "trsvcid": "8010", 00:27:31.372 "hostnqn": "nqn.2021-12.io.spdk:test", 00:27:31.372 "wait_for_attach": false, 00:27:31.372 "attach_timeout_ms": 3000 00:27:31.372 } 00:27:31.372 } 00:27:31.372 Got JSON-RPC error response 00:27:31.372 GoRPCClient: error on JSON-RPC call 00:27:31.372 10:08:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:27:31.372 10:08:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:27:31.372 10:08:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:27:31.372 10:08:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:27:31.372 10:08:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:27:31.372 10:08:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:27:31.372 10:08:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:27:31.372 10:08:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:27:31.372 10:08:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:27:31.372 10:08:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:31.372 10:08:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:31.372 10:08:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:27:31.372 10:08:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:31.631 10:08:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:27:31.631 10:08:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:27:31.631 10:08:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 88639 00:27:31.631 10:08:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:27:31.631 10:08:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:31.631 10:08:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@117 -- # sync 00:27:31.631 10:08:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:31.631 10:08:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@120 -- # set +e 00:27:31.631 10:08:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:31.631 10:08:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:31.631 rmmod nvme_tcp 00:27:31.631 rmmod nvme_fabrics 00:27:31.631 rmmod nvme_keyring 00:27:31.631 10:08:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:31.631 10:08:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@124 -- # set -e 00:27:31.631 10:08:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@125 -- # return 0 00:27:31.631 10:08:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@489 -- # '[' -n 88589 ']' 00:27:31.631 10:08:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@490 -- # killprocess 88589 00:27:31.631 10:08:45 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@948 -- # '[' -z 88589 ']' 00:27:31.631 10:08:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@952 -- # kill -0 88589 00:27:31.631 10:08:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@953 -- # uname 00:27:31.631 10:08:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:31.631 10:08:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 88589 00:27:31.631 10:08:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:27:31.631 10:08:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:27:31.631 10:08:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 88589' 00:27:31.631 killing process with pid 88589 00:27:31.631 10:08:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@967 -- # kill 88589 00:27:31.631 10:08:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@972 -- # wait 88589 00:27:31.890 10:08:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:31.890 10:08:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:31.890 10:08:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:31.890 10:08:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:31.890 10:08:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:31.890 10:08:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:31.890 10:08:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:31.890 10:08:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:31.890 10:08:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:27:31.890 00:27:31.890 real 0m10.996s 00:27:31.890 user 0m21.357s 00:27:31.891 sys 0m1.801s 00:27:31.891 10:08:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:31.891 10:08:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:31.891 ************************************ 00:27:31.891 END TEST nvmf_host_discovery 00:27:31.891 ************************************ 00:27:31.891 10:08:45 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:27:31.891 10:08:45 nvmf_tcp -- nvmf/nvmf.sh@102 -- # run_test nvmf_host_multipath_status /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:27:31.891 10:08:45 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:27:31.891 10:08:45 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:31.891 10:08:45 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:31.891 ************************************ 00:27:31.891 START TEST nvmf_host_multipath_status 00:27:31.891 ************************************ 00:27:31.891 10:08:45 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:27:32.150 * Looking for test storage... 
00:27:32.150 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:27:32.150 10:08:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:27:32.150 10:08:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:27:32.150 10:08:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:32.150 10:08:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:32.150 10:08:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:32.150 10:08:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:32.151 10:08:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:32.151 10:08:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:32.151 10:08:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:32.151 10:08:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:32.151 10:08:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:32.151 10:08:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:32.151 10:08:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec 00:27:32.151 10:08:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=a2b6b25a-cc90-4aea-9f09-c06f8a634aec 00:27:32.151 10:08:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:32.151 10:08:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:32.151 10:08:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:27:32.151 10:08:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:32.151 10:08:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:27:32.151 10:08:45 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:32.151 10:08:45 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:32.151 10:08:45 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:32.151 10:08:45 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:32.151 10:08:45 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:32.151 10:08:45 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:32.151 10:08:45 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:27:32.151 10:08:45 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:32.151 10:08:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # : 0 00:27:32.151 10:08:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:32.151 10:08:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:32.151 10:08:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:32.151 10:08:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:32.151 10:08:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:32.151 10:08:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:32.151 10:08:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:32.151 10:08:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:32.151 10:08:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:27:32.151 10:08:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:27:32.151 10:08:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:32.151 10:08:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:27:32.151 10:08:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:27:32.151 10:08:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@21 
-- # NQN=nqn.2016-06.io.spdk:cnode1 00:27:32.151 10:08:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:27:32.151 10:08:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:32.151 10:08:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:32.151 10:08:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:32.151 10:08:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:32.151 10:08:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:32.151 10:08:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:32.151 10:08:45 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:32.151 10:08:45 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:32.151 10:08:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:27:32.151 10:08:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:27:32.151 10:08:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:27:32.151 10:08:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:27:32.151 10:08:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:27:32.151 10:08:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@432 -- # nvmf_veth_init 00:27:32.151 10:08:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:32.151 10:08:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:32.151 10:08:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:27:32.151 10:08:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:27:32.151 10:08:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:27:32.151 10:08:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:27:32.151 10:08:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:27:32.151 10:08:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:32.151 10:08:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:27:32.151 10:08:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:27:32.151 10:08:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:27:32.151 10:08:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:27:32.151 10:08:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:27:32.151 10:08:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:27:32.151 Cannot find device "nvmf_tgt_br" 00:27:32.151 10:08:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@155 -- # true 00:27:32.151 10:08:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@156 -- # ip 
link set nvmf_tgt_br2 nomaster 00:27:32.151 Cannot find device "nvmf_tgt_br2" 00:27:32.151 10:08:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@156 -- # true 00:27:32.151 10:08:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:27:32.151 10:08:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:27:32.151 Cannot find device "nvmf_tgt_br" 00:27:32.151 10:08:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@158 -- # true 00:27:32.151 10:08:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:27:32.151 Cannot find device "nvmf_tgt_br2" 00:27:32.151 10:08:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@159 -- # true 00:27:32.151 10:08:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:27:32.151 10:08:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:27:32.411 10:08:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:27:32.411 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:27:32.411 10:08:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # true 00:27:32.411 10:08:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:27:32.411 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:27:32.411 10:08:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # true 00:27:32.411 10:08:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:27:32.411 10:08:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:27:32.411 10:08:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:27:32.411 10:08:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:27:32.411 10:08:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:27:32.411 10:08:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:27:32.411 10:08:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:27:32.411 10:08:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:27:32.411 10:08:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:27:32.411 10:08:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:27:32.411 10:08:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:27:32.411 10:08:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:27:32.411 10:08:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:27:32.411 10:08:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:27:32.411 10:08:45 
nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:27:32.411 10:08:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:27:32.411 10:08:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:27:32.411 10:08:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:27:32.411 10:08:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:27:32.411 10:08:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:27:32.411 10:08:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:27:32.411 10:08:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:27:32.411 10:08:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:27:32.411 10:08:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:27:32.411 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:32.411 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.074 ms 00:27:32.411 00:27:32.411 --- 10.0.0.2 ping statistics --- 00:27:32.411 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:32.411 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:27:32.411 10:08:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:27:32.411 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:27:32.411 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.043 ms 00:27:32.411 00:27:32.411 --- 10.0.0.3 ping statistics --- 00:27:32.411 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:32.411 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:27:32.411 10:08:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:27:32.411 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:32.412 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.051 ms 00:27:32.412 00:27:32.412 --- 10.0.0.1 ping statistics --- 00:27:32.412 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:32.412 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:27:32.412 10:08:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:32.412 10:08:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@433 -- # return 0 00:27:32.412 10:08:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:32.412 10:08:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:32.412 10:08:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:32.412 10:08:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:32.412 10:08:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:32.412 10:08:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:32.412 10:08:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:32.412 10:08:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:27:32.412 10:08:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:32.412 10:08:45 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:32.412 10:08:45 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:27:32.412 10:08:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:27:32.412 10:08:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=89128 00:27:32.412 10:08:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 89128 00:27:32.412 10:08:45 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 89128 ']' 00:27:32.412 10:08:45 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:32.412 10:08:45 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:32.412 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:32.412 10:08:45 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:32.412 10:08:45 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:32.412 10:08:45 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:27:32.672 [2024-07-15 10:08:46.008798] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:27:32.672 [2024-07-15 10:08:46.008866] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:32.672 [2024-07-15 10:08:46.147654] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:27:32.672 [2024-07-15 10:08:46.250956] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:32.672 [2024-07-15 10:08:46.251004] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:32.672 [2024-07-15 10:08:46.251010] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:32.672 [2024-07-15 10:08:46.251015] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:32.672 [2024-07-15 10:08:46.251019] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:32.672 [2024-07-15 10:08:46.251254] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:32.672 [2024-07-15 10:08:46.251256] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:33.610 10:08:46 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:33.610 10:08:46 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:27:33.610 10:08:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:33.610 10:08:46 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:33.610 10:08:46 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:27:33.610 10:08:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:33.610 10:08:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=89128 00:27:33.610 10:08:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:27:33.610 [2024-07-15 10:08:47.086559] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:33.610 10:08:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:27:33.869 Malloc0 00:27:33.869 10:08:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:27:34.129 10:08:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:34.389 10:08:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:34.389 [2024-07-15 10:08:47.899182] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:34.389 10:08:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 
10.0.0.2 -s 4421 00:27:34.648 [2024-07-15 10:08:48.106905] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:27:34.648 10:08:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=89226 00:27:34.648 10:08:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:27:34.648 10:08:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:27:34.648 10:08:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 89226 /var/tmp/bdevperf.sock 00:27:34.648 10:08:48 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 89226 ']' 00:27:34.648 10:08:48 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:34.648 10:08:48 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:34.648 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:27:34.648 10:08:48 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:34.648 10:08:48 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:34.648 10:08:48 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:27:35.587 10:08:49 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:35.587 10:08:49 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:27:35.587 10:08:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:27:35.846 10:08:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:27:36.105 Nvme0n1 00:27:36.105 10:08:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:27:36.364 Nvme0n1 00:27:36.364 10:08:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:27:36.364 10:08:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:27:38.902 10:08:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:27:38.902 10:08:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:27:38.902 10:08:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 
-n optimized 00:27:38.902 10:08:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:27:39.841 10:08:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:27:39.841 10:08:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:27:39.841 10:08:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:39.841 10:08:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:40.160 10:08:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:40.160 10:08:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:27:40.160 10:08:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:40.160 10:08:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:40.160 10:08:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:40.160 10:08:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:40.160 10:08:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:40.160 10:08:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:40.429 10:08:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:40.429 10:08:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:40.429 10:08:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:40.429 10:08:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:40.689 10:08:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:40.689 10:08:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:27:40.689 10:08:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:40.689 10:08:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:40.689 10:08:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:40.689 10:08:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:27:40.689 10:08:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:40.948 10:08:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:40.948 10:08:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:40.948 10:08:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:27:40.948 10:08:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:27:41.208 10:08:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:27:41.468 10:08:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:27:42.407 10:08:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:27:42.407 10:08:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:27:42.407 10:08:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:42.407 10:08:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:42.666 10:08:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:42.666 10:08:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:27:42.666 10:08:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:42.666 10:08:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:42.926 10:08:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:42.926 10:08:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:42.926 10:08:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:42.926 10:08:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:42.926 10:08:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:42.926 10:08:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:42.926 10:08:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:42.926 10:08:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:43.185 10:08:56 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:43.185 10:08:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:27:43.185 10:08:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:43.185 10:08:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:43.445 10:08:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:43.445 10:08:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:27:43.445 10:08:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:43.445 10:08:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:43.727 10:08:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:43.727 10:08:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:27:43.727 10:08:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:27:43.727 10:08:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:27:43.986 10:08:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:27:44.924 10:08:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:27:44.924 10:08:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:27:44.924 10:08:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:44.924 10:08:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:45.184 10:08:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:45.184 10:08:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:27:45.184 10:08:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:45.184 10:08:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:45.444 10:08:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:45.444 10:08:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:45.444 10:08:58 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:45.444 10:08:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:45.704 10:08:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:45.704 10:08:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:45.704 10:08:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:45.704 10:08:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:45.704 10:08:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:45.704 10:08:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:27:45.704 10:08:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:45.704 10:08:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:45.964 10:08:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:45.964 10:08:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:27:45.964 10:08:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:45.964 10:08:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:46.223 10:08:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:46.223 10:08:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:27:46.223 10:08:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:27:46.482 10:08:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:27:46.483 10:09:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:27:47.861 10:09:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:27:47.861 10:09:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:27:47.861 10:09:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:47.861 10:09:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- 
# jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:47.861 10:09:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:47.861 10:09:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:27:47.861 10:09:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:47.861 10:09:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:48.120 10:09:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:48.120 10:09:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:48.120 10:09:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:48.120 10:09:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:48.120 10:09:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:48.120 10:09:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:48.120 10:09:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:48.120 10:09:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:48.379 10:09:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:48.379 10:09:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:27:48.379 10:09:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:48.379 10:09:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:48.638 10:09:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:48.638 10:09:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:27:48.638 10:09:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:48.638 10:09:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:48.897 10:09:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:48.897 10:09:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:27:48.897 10:09:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:27:48.897 10:09:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:27:49.156 10:09:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:27:50.143 10:09:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:27:50.143 10:09:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:27:50.143 10:09:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:50.143 10:09:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:50.402 10:09:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:50.402 10:09:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:27:50.402 10:09:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:50.402 10:09:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:50.659 10:09:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:50.659 10:09:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:50.659 10:09:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:50.659 10:09:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:50.918 10:09:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:50.918 10:09:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:50.918 10:09:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:50.918 10:09:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:50.918 10:09:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:50.918 10:09:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:27:50.918 10:09:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:50.918 10:09:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:51.177 10:09:04 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:51.177 10:09:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:27:51.177 10:09:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:51.177 10:09:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:51.436 10:09:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:51.436 10:09:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:27:51.436 10:09:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:27:51.696 10:09:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:27:51.696 10:09:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:27:52.634 10:09:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:27:52.634 10:09:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:27:52.892 10:09:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:52.892 10:09:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:52.892 10:09:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:52.892 10:09:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:27:52.892 10:09:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:52.892 10:09:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:53.152 10:09:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:53.152 10:09:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:53.152 10:09:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:53.152 10:09:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:53.411 10:09:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:53.411 10:09:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:53.411 10:09:06 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:53.411 10:09:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:53.724 10:09:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:53.724 10:09:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:27:53.724 10:09:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:53.724 10:09:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:53.724 10:09:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:53.724 10:09:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:27:53.724 10:09:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:53.724 10:09:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:53.983 10:09:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:53.983 10:09:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:27:54.243 10:09:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:27:54.243 10:09:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:27:54.503 10:09:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:27:54.503 10:09:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:27:55.883 10:09:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:27:55.883 10:09:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:27:55.883 10:09:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:55.883 10:09:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:55.883 10:09:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:55.883 10:09:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:27:55.883 10:09:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:55.883 10:09:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:55.883 10:09:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:55.883 10:09:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:55.883 10:09:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:55.883 10:09:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:56.142 10:09:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:56.142 10:09:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:56.142 10:09:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:56.142 10:09:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:56.401 10:09:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:56.401 10:09:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:27:56.401 10:09:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:56.401 10:09:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:56.661 10:09:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:56.661 10:09:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:27:56.661 10:09:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:56.661 10:09:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:56.920 10:09:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:56.921 10:09:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:27:56.921 10:09:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:27:56.921 10:09:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:27:57.183 10:09:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:27:58.120 
10:09:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:27:58.120 10:09:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:27:58.120 10:09:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:58.120 10:09:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:58.379 10:09:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:58.379 10:09:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:27:58.379 10:09:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:58.379 10:09:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:58.636 10:09:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:58.636 10:09:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:58.636 10:09:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:58.636 10:09:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:58.895 10:09:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:58.895 10:09:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:58.895 10:09:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:58.895 10:09:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:58.895 10:09:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:58.895 10:09:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:27:58.895 10:09:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:58.895 10:09:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:59.159 10:09:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:59.159 10:09:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:27:59.159 10:09:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:59.159 10:09:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:59.418 10:09:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:59.418 10:09:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:27:59.418 10:09:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:27:59.677 10:09:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:27:59.677 10:09:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:28:01.082 10:09:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:28:01.082 10:09:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:28:01.082 10:09:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:28:01.082 10:09:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:01.082 10:09:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:01.082 10:09:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:28:01.082 10:09:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:01.082 10:09:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:28:01.082 10:09:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:01.082 10:09:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:28:01.082 10:09:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:01.082 10:09:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:28:01.342 10:09:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:01.342 10:09:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:28:01.342 10:09:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:28:01.342 10:09:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:01.602 10:09:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:01.602 10:09:15 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:28:01.602 10:09:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:01.602 10:09:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:28:01.862 10:09:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:01.862 10:09:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:28:01.862 10:09:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:28:01.862 10:09:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:02.121 10:09:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:02.121 10:09:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:28:02.121 10:09:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:28:02.121 10:09:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:28:02.379 10:09:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:28:03.317 10:09:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:28:03.317 10:09:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:28:03.317 10:09:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:03.317 10:09:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:28:03.576 10:09:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:03.576 10:09:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:28:03.576 10:09:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:03.576 10:09:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:28:03.835 10:09:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:28:03.835 10:09:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:28:03.835 10:09:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:03.835 10:09:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:28:04.095 10:09:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:04.095 10:09:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:28:04.095 10:09:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:04.095 10:09:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:28:04.355 10:09:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:04.355 10:09:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:28:04.355 10:09:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:04.355 10:09:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:28:04.355 10:09:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:04.355 10:09:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:28:04.355 10:09:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:04.355 10:09:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:28:04.615 10:09:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:28:04.615 10:09:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 89226 00:28:04.615 10:09:18 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 89226 ']' 00:28:04.615 10:09:18 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 89226 00:28:04.615 10:09:18 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname 00:28:04.615 10:09:18 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:04.615 10:09:18 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 89226 00:28:04.615 10:09:18 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:28:04.615 10:09:18 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:28:04.615 killing process with pid 89226 00:28:04.615 10:09:18 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 89226' 00:28:04.615 10:09:18 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 89226 00:28:04.615 10:09:18 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 89226 00:28:04.875 Connection closed with partial response: 00:28:04.875 
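The check_status/port_status pattern repeated throughout the trace above boils down to the following (a sketch inferred from the xtrace; the actual helpers in host/multipath_status.sh may differ in detail, but the RPC call, jq filter, and argument order are taken directly from the log):

    # port_status <trsvcid> <field> <expected>: query bdevperf's io paths and compare one field
    port_status() {
        local port=$1 field=$2 expected=$3 actual
        actual=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths |
            jq -r ".poll_groups[].io_paths[] | select (.transport.trsvcid==\"$port\").$field")
        [[ "$actual" == "$expected" ]]
    }

    # check_status <4420 current> <4421 current> <4420 connected> <4421 connected> <4420 accessible> <4421 accessible>
    check_status() {
        port_status 4420 current "$1" && port_status 4421 current "$2" &&
        port_status 4420 connected "$3" && port_status 4421 connected "$4" &&
        port_status 4420 accessible "$5" && port_status 4421 accessible "$6"
    }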
00:28:04.875 00:28:04.875 10:09:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 89226 00:28:04.875 10:09:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:28:04.875 [2024-07-15 10:08:48.164340] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:28:04.875 [2024-07-15 10:08:48.164434] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89226 ] 00:28:04.875 [2024-07-15 10:08:48.302591] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:04.875 [2024-07-15 10:08:48.407167] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:28:04.875 Running I/O for 90 seconds... 00:28:04.875 [2024-07-15 10:09:02.428082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:6000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:04.875 [2024-07-15 10:09:02.428149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:28:04.875 [2024-07-15 10:09:02.428193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:6008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:04.875 [2024-07-15 10:09:02.428204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:28:04.875 [2024-07-15 10:09:02.428219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:6016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:04.875 [2024-07-15 10:09:02.428228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:28:04.875 [2024-07-15 10:09:02.428242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:6024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:04.875 [2024-07-15 10:09:02.428251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:04.875 [2024-07-15 10:09:02.428265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:6032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:04.876 [2024-07-15 10:09:02.428273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:04.876 [2024-07-15 10:09:02.428286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:6040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:04.876 [2024-07-15 10:09:02.428294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:28:04.876 [2024-07-15 10:09:02.428308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:6048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:04.876 [2024-07-15 10:09:02.428316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:28:04.876 [2024-07-15 10:09:02.428330] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:6056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
[... repeated nvme_qpair.c 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion *NOTICE* pairs elided: outstanding WRITE (lba 6056-6320) and READ (lba 5880-5992) commands on qid:1 at 10:09:02, and WRITE (lba 88408-88736) and READ (lba 87808-88312) commands on qid:1 at 10:09:15, each completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02) cdw0:0 p:0 m:0 dnr:0 while the active path was inaccessible ...]
00:28:04.878 Received shutdown signal, test time was about 28.228049 seconds
00:28:04.878 
00:28:04.878 Latency(us)
00:28:04.878 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:04.878 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:28:04.878 Verification LBA range: start 0x0 length 0x4000
00:28:04.878 Nvme0n1 : 28.23 11240.31 43.91 0.00 0.00 11365.76 188.70 3018433.62
00:28:04.878 ===================================================================================================================
00:28:04.878 Total : 11240.31 43.91 0.00 0.00 11365.76 188.70 3018433.62
00:28:04.878 10:09:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:28:05.138 10:09:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT
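Note: the two trace lines above are the whole target-side teardown for this test; everything the test exported hangs off the one subsystem. A minimal sketch of the same step, assuming the target app is still up and listening on its default RPC socket (no -s was passed above), with the repo path and NQN exactly as in the trace:

SPDK=/home/vagrant/spdk_repo/spdk
# deleting the subsystem also drops its namespaces and listeners; the TCP transport
# itself stays registered until the target process exits
$SPDK/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1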
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:28:05.138 10:09:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:28:05.138 10:09:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:05.138 10:09:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync 00:28:05.138 10:09:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:05.138 10:09:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e 00:28:05.138 10:09:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:05.138 10:09:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:05.138 rmmod nvme_tcp 00:28:05.138 rmmod nvme_fabrics 00:28:05.138 rmmod nvme_keyring 00:28:05.138 10:09:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:05.138 10:09:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e 00:28:05.138 10:09:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0 00:28:05.138 10:09:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # '[' -n 89128 ']' 00:28:05.138 10:09:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # killprocess 89128 00:28:05.138 10:09:18 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 89128 ']' 00:28:05.138 10:09:18 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 89128 00:28:05.138 10:09:18 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname 00:28:05.138 10:09:18 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:05.138 10:09:18 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 89128 00:28:05.138 10:09:18 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:28:05.138 10:09:18 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:28:05.138 killing process with pid 89128 00:28:05.138 10:09:18 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 89128' 00:28:05.138 10:09:18 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 89128 00:28:05.138 10:09:18 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 89128 00:28:05.397 10:09:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:05.397 10:09:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:05.397 10:09:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:05.397 10:09:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:05.397 10:09:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:05.397 10:09:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:05.397 10:09:18 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:05.397 10:09:18 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:05.397 
10:09:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:28:05.397 00:28:05.397 real 0m33.525s 00:28:05.397 user 1m48.342s 00:28:05.397 sys 0m7.403s 00:28:05.397 ************************************ 00:28:05.397 END TEST nvmf_host_multipath_status 00:28:05.397 ************************************ 00:28:05.397 10:09:18 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:05.397 10:09:18 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:28:05.658 10:09:18 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:28:05.658 10:09:18 nvmf_tcp -- nvmf/nvmf.sh@103 -- # run_test nvmf_discovery_remove_ifc /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:28:05.658 10:09:18 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:28:05.658 10:09:18 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:05.658 10:09:18 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:05.658 ************************************ 00:28:05.658 START TEST nvmf_discovery_remove_ifc 00:28:05.658 ************************************ 00:28:05.658 10:09:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:28:05.658 * Looking for test storage... 00:28:05.658 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:28:05.658 10:09:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:28:05.658 10:09:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:28:05.658 10:09:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:05.658 10:09:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:05.658 10:09:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:05.658 10:09:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:05.658 10:09:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:05.658 10:09:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:05.658 10:09:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:05.658 10:09:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:05.658 10:09:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:05.658 10:09:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:05.658 10:09:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec 00:28:05.658 10:09:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=a2b6b25a-cc90-4aea-9f09-c06f8a634aec 00:28:05.658 10:09:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:05.658 10:09:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:05.658 10:09:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:28:05.658 10:09:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:05.658 10:09:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:28:05.658 10:09:19 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:05.658 10:09:19 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:05.658 10:09:19 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:05.658 10:09:19 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:05.658 10:09:19 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:05.658 10:09:19 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:05.658 10:09:19 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:28:05.658 10:09:19 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:05.658 10:09:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # : 0 00:28:05.658 10:09:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:05.658 10:09:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:05.658 10:09:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:05.658 10:09:19 nvmf_tcp.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:05.658 10:09:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:05.658 10:09:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:05.658 10:09:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:05.658 10:09:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:05.658 10:09:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:28:05.658 10:09:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:28:05.658 10:09:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:28:05.658 10:09:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:28:05.658 10:09:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:28:05.658 10:09:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:28:05.658 10:09:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:28:05.658 10:09:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:05.658 10:09:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:05.658 10:09:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:05.658 10:09:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:05.658 10:09:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:05.658 10:09:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:05.658 10:09:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:05.658 10:09:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:05.658 10:09:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:28:05.658 10:09:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:28:05.658 10:09:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:28:05.658 10:09:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:28:05.658 10:09:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:28:05.658 10:09:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@432 -- # nvmf_veth_init 00:28:05.658 10:09:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:05.658 10:09:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:05.658 10:09:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:28:05.658 10:09:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:28:05.658 10:09:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:28:05.658 10:09:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:28:05.658 10:09:19 
nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:28:05.658 10:09:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:05.658 10:09:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:28:05.658 10:09:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:28:05.658 10:09:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:28:05.658 10:09:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:28:05.658 10:09:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:28:05.658 10:09:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:28:05.658 Cannot find device "nvmf_tgt_br" 00:28:05.658 10:09:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@155 -- # true 00:28:05.658 10:09:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:28:05.658 Cannot find device "nvmf_tgt_br2" 00:28:05.658 10:09:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@156 -- # true 00:28:05.658 10:09:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:28:05.658 10:09:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:28:05.918 Cannot find device "nvmf_tgt_br" 00:28:05.918 10:09:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@158 -- # true 00:28:05.918 10:09:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:28:05.918 Cannot find device "nvmf_tgt_br2" 00:28:05.918 10:09:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@159 -- # true 00:28:05.918 10:09:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:28:05.918 10:09:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:28:05.918 10:09:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:28:05.918 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:28:05.918 10:09:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # true 00:28:05.918 10:09:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:28:05.918 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:28:05.918 10:09:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # true 00:28:05.918 10:09:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:28:05.918 10:09:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:28:05.918 10:09:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:28:05.918 10:09:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:28:05.918 10:09:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:28:05.918 10:09:19 nvmf_tcp.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:28:05.918 10:09:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:28:05.918 10:09:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:28:05.918 10:09:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:28:05.918 10:09:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:28:05.918 10:09:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:28:05.918 10:09:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:28:05.918 10:09:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:28:05.918 10:09:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:28:05.919 10:09:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:28:05.919 10:09:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:28:05.919 10:09:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:28:05.919 10:09:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:28:05.919 10:09:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:28:05.919 10:09:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:28:05.919 10:09:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:28:05.919 10:09:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:28:05.919 10:09:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:28:05.919 10:09:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:28:05.919 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:05.919 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.125 ms 00:28:05.919 00:28:05.919 --- 10.0.0.2 ping statistics --- 00:28:05.919 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:05.919 rtt min/avg/max/mdev = 0.125/0.125/0.125/0.000 ms 00:28:05.919 10:09:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:28:05.919 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:28:05.919 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.040 ms 00:28:05.919 00:28:05.919 --- 10.0.0.3 ping statistics --- 00:28:05.919 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:05.919 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:28:05.919 10:09:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:28:05.919 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:05.919 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:28:05.919 00:28:05.919 --- 10.0.0.1 ping statistics --- 00:28:05.919 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:05.919 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:28:05.919 10:09:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:05.919 10:09:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@433 -- # return 0 00:28:05.919 10:09:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:05.919 10:09:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:05.919 10:09:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:05.919 10:09:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:05.919 10:09:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:05.919 10:09:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:05.919 10:09:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:06.178 10:09:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:28:06.178 10:09:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:06.178 10:09:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:06.178 10:09:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:06.178 10:09:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # nvmfpid=90477 00:28:06.178 10:09:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # waitforlisten 90477 00:28:06.178 10:09:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 90477 ']' 00:28:06.178 10:09:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:06.178 10:09:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:28:06.178 10:09:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:06.178 10:09:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:06.178 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:06.178 10:09:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:06.178 10:09:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:06.178 [2024-07-15 10:09:19.593255] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
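Note: with NET_TYPE=virt the harness has no physical NIC to use, so nvmf_veth_init (traced above) builds a veth/bridge topology and then launches the target inside the nvmf_tgt_ns_spdk namespace. A condensed sketch of those steps, with device names and addresses exactly as in the trace (the second target interface, nvmf_tgt_if2 / 10.0.0.3, is added the same way):

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br      # host-side pair
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br        # target-side pair
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if                       # initiator address
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if   # target address
ip link set nvmf_init_if up && ip link set nvmf_init_br up && ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge && ip link set nvmf_br up      # tie both sides together
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2                                             # sanity check, host -> target ns
ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2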
00:28:06.178 [2024-07-15 10:09:19.593332] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:06.178 [2024-07-15 10:09:19.721012] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:06.438 [2024-07-15 10:09:19.813804] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:06.438 [2024-07-15 10:09:19.813868] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:06.438 [2024-07-15 10:09:19.813874] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:06.438 [2024-07-15 10:09:19.813879] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:06.438 [2024-07-15 10:09:19.813883] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:06.438 [2024-07-15 10:09:19.813907] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:07.006 10:09:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:07.006 10:09:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0 00:28:07.006 10:09:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:07.006 10:09:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:07.006 10:09:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:07.006 10:09:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:07.006 10:09:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:28:07.006 10:09:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:07.006 10:09:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:07.006 [2024-07-15 10:09:20.492848] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:07.006 [2024-07-15 10:09:20.500914] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:28:07.006 null0 00:28:07.006 [2024-07-15 10:09:20.532802] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:07.006 10:09:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:07.006 10:09:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=90528 00:28:07.006 10:09:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:28:07.006 10:09:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 90528 /tmp/host.sock 00:28:07.006 10:09:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 90528 ']' 00:28:07.007 10:09:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:28:07.007 10:09:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:07.007 Waiting for process to start up and listen on UNIX 
domain socket /tmp/host.sock... 00:28:07.007 10:09:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:28:07.007 10:09:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:07.007 10:09:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:07.264 [2024-07-15 10:09:20.604764] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:28:07.264 [2024-07-15 10:09:20.604835] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90528 ] 00:28:07.264 [2024-07-15 10:09:20.741953] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:07.264 [2024-07-15 10:09:20.844080] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:08.211 10:09:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:08.211 10:09:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0 00:28:08.211 10:09:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:08.211 10:09:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:28:08.211 10:09:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:08.211 10:09:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:08.211 10:09:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:08.211 10:09:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:28:08.211 10:09:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:08.211 10:09:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:08.211 10:09:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:08.211 10:09:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:28:08.211 10:09:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:08.211 10:09:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:09.152 [2024-07-15 10:09:22.561549] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:28:09.152 [2024-07-15 10:09:22.561582] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:28:09.152 [2024-07-15 10:09:22.561594] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:28:09.152 [2024-07-15 10:09:22.647517] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:28:09.152 
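At this point the trace shows the discovery_remove_ifc host side coming up: a second nvmf_tgt instance is launched as the "host" application on a private RPC socket, bdev_nvme options are applied before framework init, and discovery is started against the target's discovery service on 10.0.0.2:8009 with deliberately short reconnect/loss timeouts. A minimal standalone sketch of that RPC sequence, with the socket path, NQN, and flag values copied from the trace (scripts/rpc.py is assumed in place of the test's rpc_cmd wrapper, and the binary path is relative to an SPDK build tree):

    HOST_SOCK=/tmp/host.sock
    ./build/bin/nvmf_tgt -m 0x1 -r "$HOST_SOCK" --wait-for-rpc -L bdev_nvme &

    # Option flag reproduced verbatim from the trace; applied before framework init.
    scripts/rpc.py -s "$HOST_SOCK" bdev_nvme_set_options -e 1
    scripts/rpc.py -s "$HOST_SOCK" framework_start_init

    # Attach through the discovery service; the short timeouts make the later
    # interface removal surface quickly as a controller loss.
    scripts/rpc.py -s "$HOST_SOCK" bdev_nvme_start_discovery -b nvme -t tcp \
        -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test \
        --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 \
        --fast-io-fail-timeout-sec 1 --wait-for-attach

The -q value is simply the host NQN the trace used, and --wait-for-attach makes the RPC block until the discovered subsystem is attached, which is why nvme0n1 shows up in the bdev list immediately afterwards.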
[2024-07-15 10:09:22.704029] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:28:09.152 [2024-07-15 10:09:22.704136] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:28:09.152 [2024-07-15 10:09:22.704159] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:28:09.152 [2024-07-15 10:09:22.704187] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:28:09.152 [2024-07-15 10:09:22.704210] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:28:09.152 10:09:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:09.152 10:09:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:28:09.152 10:09:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:09.152 [2024-07-15 10:09:22.709989] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x69c650 was disconnected and freed. delete nvme_qpair. 00:28:09.152 10:09:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:09.152 10:09:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:09.152 10:09:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:09.152 10:09:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:09.152 10:09:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:09.152 10:09:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:09.410 10:09:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:09.410 10:09:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:28:09.410 10:09:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.2/24 dev nvmf_tgt_if 00:28:09.410 10:09:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down 00:28:09.410 10:09:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:28:09.411 10:09:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:09.411 10:09:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:09.411 10:09:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:09.411 10:09:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:09.411 10:09:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:09.411 10:09:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:09.411 10:09:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:09.411 10:09:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:09.411 10:09:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:28:09.411 10:09:22 
nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:28:10.359 10:09:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:10.359 10:09:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:10.359 10:09:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:10.359 10:09:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:10.359 10:09:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:10.359 10:09:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:10.359 10:09:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:10.359 10:09:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:10.359 10:09:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:28:10.359 10:09:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:28:11.735 10:09:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:11.735 10:09:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:11.735 10:09:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:11.735 10:09:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:11.735 10:09:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:11.735 10:09:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:11.735 10:09:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:11.735 10:09:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:11.735 10:09:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:28:11.735 10:09:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:28:12.671 10:09:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:12.671 10:09:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:12.671 10:09:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:12.671 10:09:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:12.671 10:09:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:12.671 10:09:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:12.671 10:09:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:12.671 10:09:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:12.671 10:09:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:28:12.671 10:09:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:28:13.609 10:09:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:13.609 10:09:27 
nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:13.609 10:09:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:13.609 10:09:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:13.609 10:09:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:13.609 10:09:27 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:13.609 10:09:27 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:13.609 10:09:27 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:13.609 10:09:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:28:13.609 10:09:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:28:14.547 10:09:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:14.547 10:09:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:14.547 10:09:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:14.547 10:09:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:14.547 10:09:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:14.547 10:09:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:14.547 10:09:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:14.547 10:09:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:14.548 [2024-07-15 10:09:28.123188] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:28:14.548 [2024-07-15 10:09:28.123245] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:14.548 [2024-07-15 10:09:28.123270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:14.548 [2024-07-15 10:09:28.123280] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:14.548 [2024-07-15 10:09:28.123286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:14.548 [2024-07-15 10:09:28.123293] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:14.548 [2024-07-15 10:09:28.123299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:14.548 [2024-07-15 10:09:28.123306] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:14.548 [2024-07-15 10:09:28.123311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:14.548 [2024-07-15 10:09:28.123318] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP 
ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:28:14.548 [2024-07-15 10:09:28.123324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:14.548 [2024-07-15 10:09:28.123331] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x665900 is same with the state(5) to be set 00:28:14.806 [2024-07-15 10:09:28.133165] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x665900 (9): Bad file descriptor 00:28:14.806 [2024-07-15 10:09:28.143163] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:28:14.806 10:09:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:28:14.806 10:09:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:28:15.744 10:09:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:15.744 10:09:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:15.744 10:09:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:15.744 10:09:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:15.744 10:09:29 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:15.744 10:09:29 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:15.744 10:09:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:15.744 [2024-07-15 10:09:29.188697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:28:15.744 [2024-07-15 10:09:29.188808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x665900 with addr=10.0.0.2, port=4420 00:28:15.744 [2024-07-15 10:09:29.188828] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x665900 is same with the state(5) to be set 00:28:15.744 [2024-07-15 10:09:29.188877] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x665900 (9): Bad file descriptor 00:28:15.744 [2024-07-15 10:09:29.189412] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:15.744 [2024-07-15 10:09:29.189442] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:28:15.744 [2024-07-15 10:09:29.189451] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:28:15.744 [2024-07-15 10:09:29.189461] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:28:15.744 [2024-07-15 10:09:29.189487] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
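The repeated rpc_cmd / jq / sort / xargs lines above and below are the test's get_bdev_list helper running inside wait_for_bdev: after the target-side interface goes away, the host is polled once per second until nvme0n1 drops out of bdev_get_bdevs. A rough equivalent of that polling idiom (helper names mirror the script, but the bodies are a simplified stand-in for the autotest code and omit its timeout handling):

    get_bdev_list() {
        # Same pipeline as the sh@29 trace lines: bdev names, sorted, joined on one line.
        scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }

    wait_for_bdev() {
        # Poll once a second until the list matches the expected string
        # ("nvme0n1" while attached, "" once the controller is fully torn down).
        local expected=$1
        while [[ "$(get_bdev_list)" != "$expected" ]]; do
            sleep 1
        done
    }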
00:28:15.744 [2024-07-15 10:09:29.189498] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:28:15.744 10:09:29 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:15.744 10:09:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:28:15.744 10:09:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:28:16.680 [2024-07-15 10:09:30.187617] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:28:16.680 [2024-07-15 10:09:30.187685] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:28:16.680 [2024-07-15 10:09:30.187692] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:28:16.680 [2024-07-15 10:09:30.187699] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:28:16.680 [2024-07-15 10:09:30.187716] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:16.680 [2024-07-15 10:09:30.187741] bdev_nvme.c:6734:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:28:16.680 [2024-07-15 10:09:30.187789] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:16.680 [2024-07-15 10:09:30.187799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.680 [2024-07-15 10:09:30.187808] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:16.681 [2024-07-15 10:09:30.187816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.681 [2024-07-15 10:09:30.187822] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:16.681 [2024-07-15 10:09:30.187827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.681 [2024-07-15 10:09:30.187833] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:16.681 [2024-07-15 10:09:30.187838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.681 [2024-07-15 10:09:30.187843] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:28:16.681 [2024-07-15 10:09:30.187848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.681 [2024-07-15 10:09:30.187853] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
00:28:16.681 [2024-07-15 10:09:30.187880] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6083e0 (9): Bad file descriptor 00:28:16.681 [2024-07-15 10:09:30.188872] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:28:16.681 [2024-07-15 10:09:30.188890] nvme_ctrlr.c:1213:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:28:16.681 10:09:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:16.681 10:09:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:16.681 10:09:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:16.681 10:09:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:16.681 10:09:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:16.681 10:09:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:16.681 10:09:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:16.681 10:09:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:16.940 10:09:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:28:16.940 10:09:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:28:16.940 10:09:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:28:16.940 10:09:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:28:16.940 10:09:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:16.940 10:09:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:16.940 10:09:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:16.940 10:09:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:16.940 10:09:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:16.940 10:09:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:16.940 10:09:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:16.940 10:09:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:16.940 10:09:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:28:16.940 10:09:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:28:17.877 10:09:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:17.877 10:09:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:17.877 10:09:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:17.877 10:09:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:17.877 10:09:31 
nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:17.877 10:09:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:17.877 10:09:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:17.877 10:09:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:17.877 10:09:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:28:17.877 10:09:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:28:18.813 [2024-07-15 10:09:32.194049] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:28:18.813 [2024-07-15 10:09:32.194103] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:28:18.813 [2024-07-15 10:09:32.194120] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:28:18.813 [2024-07-15 10:09:32.279978] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:28:18.813 [2024-07-15 10:09:32.335652] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:28:18.813 [2024-07-15 10:09:32.335727] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:28:18.813 [2024-07-15 10:09:32.335744] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:28:18.813 [2024-07-15 10:09:32.335759] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:28:18.813 [2024-07-15 10:09:32.335766] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:28:18.813 [2024-07-15 10:09:32.342501] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x681300 was disconnected and freed. delete nvme_qpair. 
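The nvme1 attach above is the second half of the interface flip this test exercises: the target address was deleted and the interface downed inside the target's network namespace (the sh@75/76 trace lines earlier), the host-side bdev disappeared, and the address was then restored (the sh@82/83 lines), letting discovery re-attach the subsystem as nvme1/nvme1n1. The flip itself, reproduced from those trace lines:

    # Pull the target path out from under the connected host...
    ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down
    # ...wait for nvme0n1 to vanish from bdev_get_bdevs, then bring it back.
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    # Discovery reconnects and the namespace comes back as nvme1n1.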
00:28:19.071 10:09:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:19.071 10:09:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:19.071 10:09:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:19.071 10:09:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:19.071 10:09:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:19.071 10:09:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:19.071 10:09:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:19.071 10:09:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:19.071 10:09:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:28:19.071 10:09:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:28:19.071 10:09:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 90528 00:28:19.071 10:09:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 90528 ']' 00:28:19.071 10:09:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # kill -0 90528 00:28:19.071 10:09:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 00:28:19.071 10:09:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:19.071 10:09:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 90528 00:28:19.071 killing process with pid 90528 00:28:19.071 10:09:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:28:19.071 10:09:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:28:19.071 10:09:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 90528' 00:28:19.071 10:09:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 90528 00:28:19.071 10:09:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 90528 00:28:19.330 10:09:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:28:19.330 10:09:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:19.330 10:09:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@117 -- # sync 00:28:19.330 10:09:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:19.330 10:09:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@120 -- # set +e 00:28:19.330 10:09:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:19.330 10:09:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:19.330 rmmod nvme_tcp 00:28:19.330 rmmod nvme_fabrics 00:28:19.330 rmmod nvme_keyring 00:28:19.330 10:09:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:19.330 10:09:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set -e 00:28:19.330 10:09:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # return 0 00:28:19.330 10:09:32 
nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # '[' -n 90477 ']' 00:28:19.330 10:09:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # killprocess 90477 00:28:19.330 10:09:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 90477 ']' 00:28:19.330 10:09:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # kill -0 90477 00:28:19.330 10:09:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 00:28:19.330 10:09:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:19.330 10:09:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 90477 00:28:19.330 10:09:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:28:19.330 killing process with pid 90477 00:28:19.330 10:09:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:28:19.330 10:09:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 90477' 00:28:19.330 10:09:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 90477 00:28:19.330 10:09:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 90477 00:28:19.589 10:09:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:19.589 10:09:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:19.589 10:09:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:19.589 10:09:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:19.589 10:09:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:19.589 10:09:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:19.590 10:09:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:19.590 10:09:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:19.590 10:09:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:28:19.590 00:28:19.590 real 0m14.070s 00:28:19.590 user 0m25.221s 00:28:19.590 sys 0m1.572s 00:28:19.590 10:09:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:19.590 10:09:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:19.590 ************************************ 00:28:19.590 END TEST nvmf_discovery_remove_ifc 00:28:19.590 ************************************ 00:28:19.590 10:09:33 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:28:19.590 10:09:33 nvmf_tcp -- nvmf/nvmf.sh@104 -- # run_test nvmf_identify_kernel_target /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:28:19.590 10:09:33 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:28:19.590 10:09:33 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:19.590 10:09:33 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:19.590 ************************************ 00:28:19.590 START TEST nvmf_identify_kernel_target 00:28:19.590 ************************************ 00:28:19.590 10:09:33 nvmf_tcp.nvmf_identify_kernel_target 
-- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:28:19.851 * Looking for test storage... 00:28:19.851 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:28:19.851 10:09:33 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:28:19.851 10:09:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:28:19.851 10:09:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:19.851 10:09:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:19.851 10:09:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:19.851 10:09:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:19.851 10:09:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:19.851 10:09:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:19.851 10:09:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:19.851 10:09:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:19.851 10:09:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:19.851 10:09:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:19.851 10:09:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec 00:28:19.851 10:09:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=a2b6b25a-cc90-4aea-9f09-c06f8a634aec 00:28:19.851 10:09:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:19.851 10:09:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:19.851 10:09:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:28:19.851 10:09:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:19.851 10:09:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:28:19.851 10:09:33 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:19.851 10:09:33 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:19.851 10:09:33 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:19.851 10:09:33 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:19.851 10:09:33 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:19.851 10:09:33 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:19.851 10:09:33 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:28:19.851 10:09:33 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:19.851 10:09:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:28:19.851 10:09:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:19.851 10:09:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:19.851 10:09:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:19.851 10:09:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:19.851 10:09:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:19.851 10:09:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:19.851 10:09:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:19.851 10:09:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:19.851 10:09:33 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:28:19.851 10:09:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:19.851 10:09:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:19.851 10:09:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:19.851 10:09:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:19.851 10:09:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:19.851 10:09:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:28:19.851 10:09:33 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:19.851 10:09:33 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:19.851 10:09:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:28:19.851 10:09:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:28:19.851 10:09:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:28:19.851 10:09:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:28:19.851 10:09:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:28:19.851 10:09:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@432 -- # nvmf_veth_init 00:28:19.851 10:09:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:19.851 10:09:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:19.851 10:09:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:28:19.851 10:09:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:28:19.851 10:09:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:28:19.851 10:09:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:28:19.851 10:09:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:28:19.851 10:09:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:19.851 10:09:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:28:19.851 10:09:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:28:19.851 10:09:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:28:19.851 10:09:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:28:19.851 10:09:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:28:19.852 10:09:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:28:19.852 Cannot find device "nvmf_tgt_br" 00:28:19.852 10:09:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@155 -- # true 00:28:19.852 10:09:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:28:19.852 Cannot find device "nvmf_tgt_br2" 00:28:19.852 10:09:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@156 -- # true 00:28:19.852 10:09:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:28:19.852 10:09:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:28:19.852 Cannot find device "nvmf_tgt_br" 00:28:19.852 10:09:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@158 -- # true 00:28:19.852 10:09:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:28:19.852 Cannot find device "nvmf_tgt_br2" 00:28:19.852 10:09:33 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@159 -- # true 00:28:19.852 10:09:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:28:19.852 10:09:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:28:19.852 10:09:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:28:19.852 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:28:20.112 10:09:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # true 00:28:20.112 10:09:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:28:20.112 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:28:20.112 10:09:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # true 00:28:20.112 10:09:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:28:20.112 10:09:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:28:20.112 10:09:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:28:20.112 10:09:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:28:20.112 10:09:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:28:20.112 10:09:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:28:20.112 10:09:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:28:20.112 10:09:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:28:20.112 10:09:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:28:20.112 10:09:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:28:20.112 10:09:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:28:20.112 10:09:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:28:20.112 10:09:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:28:20.112 10:09:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:28:20.112 10:09:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:28:20.113 10:09:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:28:20.113 10:09:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:28:20.113 10:09:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:28:20.113 10:09:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:28:20.113 10:09:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br 
master nvmf_br 00:28:20.113 10:09:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:28:20.113 10:09:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:28:20.113 10:09:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:28:20.113 10:09:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:28:20.113 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:20.113 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.053 ms 00:28:20.113 00:28:20.113 --- 10.0.0.2 ping statistics --- 00:28:20.113 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:20.113 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:28:20.113 10:09:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:28:20.113 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:28:20.113 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.030 ms 00:28:20.113 00:28:20.113 --- 10.0.0.3 ping statistics --- 00:28:20.113 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:20.113 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:28:20.113 10:09:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:28:20.113 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:20.113 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:28:20.113 00:28:20.113 --- 10.0.0.1 ping statistics --- 00:28:20.113 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:20.113 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:28:20.113 10:09:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:20.113 10:09:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@433 -- # return 0 00:28:20.113 10:09:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:20.113 10:09:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:20.113 10:09:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:20.113 10:09:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:20.113 10:09:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:20.113 10:09:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:20.113 10:09:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:20.113 10:09:33 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:28:20.113 10:09:33 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:28:20.113 10:09:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@741 -- # local ip 00:28:20.113 10:09:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:20.113 10:09:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:20.113 10:09:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:20.113 10:09:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@745 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:20.113 10:09:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:20.113 10:09:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:20.113 10:09:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:20.113 10:09:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:20.113 10:09:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:20.113 10:09:33 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:28:20.113 10:09:33 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:28:20.113 10:09:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:28:20.113 10:09:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:28:20.113 10:09:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:28:20.113 10:09:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:28:20.113 10:09:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:28:20.113 10:09:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@639 -- # local block nvme 00:28:20.113 10:09:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:28:20.113 10:09:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 00:28:20.113 10:09:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:28:20.113 10:09:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:28:20.690 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:28:20.690 Waiting for block devices as requested 00:28:20.690 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:28:20.959 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:28:20.959 10:09:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:28:20.959 10:09:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:28:20.959 10:09:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:28:20.959 10:09:34 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:28:20.959 10:09:34 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:28:20.959 10:09:34 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:28:20.959 10:09:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:28:20.959 10:09:34 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:28:20.959 10:09:34 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:28:20.959 No valid GPT data, bailing 00:28:20.959 10:09:34 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:28:20.959 10:09:34 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:28:20.959 10:09:34 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:28:20.960 10:09:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:28:20.960 10:09:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:28:20.960 10:09:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n2 ]] 00:28:20.960 10:09:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n2 00:28:20.960 10:09:34 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n2 00:28:20.960 10:09:34 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:28:20.960 10:09:34 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:28:20.960 10:09:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n2 00:28:20.960 10:09:34 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:28:20.960 10:09:34 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:28:20.960 No valid GPT data, bailing 00:28:20.960 10:09:34 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:28:20.960 10:09:34 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 
-- # pt= 00:28:20.960 10:09:34 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:28:20.960 10:09:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n2 00:28:20.960 10:09:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:28:20.960 10:09:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n3 ]] 00:28:20.960 10:09:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n3 00:28:20.960 10:09:34 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n3 00:28:20.960 10:09:34 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:28:20.960 10:09:34 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:28:20.960 10:09:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n3 00:28:20.960 10:09:34 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:28:20.960 10:09:34 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:28:21.220 No valid GPT data, bailing 00:28:21.220 10:09:34 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:28:21.220 10:09:34 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:28:21.220 10:09:34 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:28:21.220 10:09:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n3 00:28:21.220 10:09:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:28:21.220 10:09:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme1n1 ]] 00:28:21.220 10:09:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme1n1 00:28:21.220 10:09:34 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:28:21.220 10:09:34 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:28:21.220 10:09:34 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:28:21.220 10:09:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme1n1 00:28:21.220 10:09:34 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:28:21.220 10:09:34 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:28:21.220 No valid GPT data, bailing 00:28:21.220 10:09:34 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:28:21.220 10:09:34 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:28:21.220 10:09:34 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:28:21.220 10:09:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme1n1 00:28:21.220 10:09:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme1n1 ]] 00:28:21.220 10:09:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 
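The mkdir on the line above, together with the mkdir/echo/ln -s trace lines that follow, is configure_kernel_target building a Linux-kernel nvmet target over configfs so that identify_kernel_nvmf.sh has something to discover at 10.0.0.1:4420. A condensed sketch of that sequence; the values match the trace, but the configfs attribute names (attr_*, device_path, enable, addr_*) are the standard kernel nvmet ones and are an assumption here, since the trace only records the echoed values, not their destinations:

    nqn=nqn.2016-06.io.spdk:testnqn
    sub=/sys/kernel/config/nvmet/subsystems/$nqn
    port=/sys/kernel/config/nvmet/ports/1

    mkdir -p "$sub/namespaces/1" "$port"
    echo "SPDK-$nqn"  > "$sub/attr_model"                # assumed destination
    echo 1            > "$sub/attr_allow_any_host"       # assumed destination
    echo /dev/nvme1n1 > "$sub/namespaces/1/device_path"  # block device selected above
    echo 1            > "$sub/namespaces/1/enable"
    echo 10.0.0.1     > "$port/addr_traddr"
    echo tcp          > "$port/addr_trtype"
    echo 4420         > "$port/addr_trsvcid"
    echo ipv4         > "$port/addr_adrfam"
    ln -s "$sub" "$port/subsystems/"

The nvme discover call a few lines further down then reports exactly two discovery log entries: the discovery subsystem itself and nqn.2016-06.io.spdk:testnqn at 10.0.0.1:4420.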
00:28:21.220 10:09:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:28:21.220 10:09:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:28:21.220 10:09:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:28:21.220 10:09:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1 00:28:21.220 10:09:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme1n1 00:28:21.220 10:09:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # echo 1 00:28:21.220 10:09:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:28:21.220 10:09:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo tcp 00:28:21.220 10:09:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420 00:28:21.220 10:09:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4 00:28:21.220 10:09:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:28:21.220 10:09:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec --hostid=a2b6b25a-cc90-4aea-9f09-c06f8a634aec -a 10.0.0.1 -t tcp -s 4420 00:28:21.220 00:28:21.220 Discovery Log Number of Records 2, Generation counter 2 00:28:21.220 =====Discovery Log Entry 0====== 00:28:21.220 trtype: tcp 00:28:21.220 adrfam: ipv4 00:28:21.220 subtype: current discovery subsystem 00:28:21.220 treq: not specified, sq flow control disable supported 00:28:21.220 portid: 1 00:28:21.220 trsvcid: 4420 00:28:21.220 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:28:21.220 traddr: 10.0.0.1 00:28:21.220 eflags: none 00:28:21.220 sectype: none 00:28:21.220 =====Discovery Log Entry 1====== 00:28:21.220 trtype: tcp 00:28:21.220 adrfam: ipv4 00:28:21.220 subtype: nvme subsystem 00:28:21.220 treq: not specified, sq flow control disable supported 00:28:21.220 portid: 1 00:28:21.220 trsvcid: 4420 00:28:21.220 subnqn: nqn.2016-06.io.spdk:testnqn 00:28:21.220 traddr: 10.0.0.1 00:28:21.220 eflags: none 00:28:21.220 sectype: none 00:28:21.220 10:09:34 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:28:21.220 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:28:21.481 ===================================================== 00:28:21.481 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:28:21.481 ===================================================== 00:28:21.481 Controller Capabilities/Features 00:28:21.481 ================================ 00:28:21.481 Vendor ID: 0000 00:28:21.481 Subsystem Vendor ID: 0000 00:28:21.481 Serial Number: d81c2a43a69b160c631f 00:28:21.481 Model Number: Linux 00:28:21.481 Firmware Version: 6.7.0-68 00:28:21.481 Recommended Arb Burst: 0 00:28:21.481 IEEE OUI Identifier: 00 00 00 00:28:21.481 Multi-path I/O 00:28:21.481 May have multiple subsystem ports: No 00:28:21.481 May have multiple controllers: No 00:28:21.481 Associated with SR-IOV VF: No 00:28:21.481 Max Data Transfer Size: Unlimited 00:28:21.481 Max Number of Namespaces: 0 
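Before the identify dump continues, a condensed recap of what nvmf/common.sh just did in the trace above: it exported /dev/nvme1n1 through the kernel nvmet target over TCP on 10.0.0.1:4420 and confirmed it with nvme discover. The sketch keeps the values from the trace; because set -x does not print redirections, the configfs attribute names on the right-hand side of each echo are the standard nvmet ones and are an assumption about where each value lands.

# Kernel NVMe-oF/TCP target, condensed from the trace above.
subnqn=nqn.2016-06.io.spdk:testnqn
mkdir "/sys/kernel/config/nvmet/subsystems/$subnqn"
mkdir "/sys/kernel/config/nvmet/subsystems/$subnqn/namespaces/1"
mkdir /sys/kernel/config/nvmet/ports/1
echo 1            > "/sys/kernel/config/nvmet/subsystems/$subnqn/attr_allow_any_host"
echo /dev/nvme1n1 > "/sys/kernel/config/nvmet/subsystems/$subnqn/namespaces/1/device_path"
echo 1            > "/sys/kernel/config/nvmet/subsystems/$subnqn/namespaces/1/enable"
echo 10.0.0.1     > /sys/kernel/config/nvmet/ports/1/addr_traddr
echo tcp          > /sys/kernel/config/nvmet/ports/1/addr_trtype
echo 4420         > /sys/kernel/config/nvmet/ports/1/addr_trsvcid
echo ipv4         > /sys/kernel/config/nvmet/ports/1/addr_adrfam
ln -s "/sys/kernel/config/nvmet/subsystems/$subnqn" /sys/kernel/config/nvmet/ports/1/subsystems/
# The trace also echoes SPDK-$subnqn into a subsystem attribute (presumably the
# model string), which is why the second identify further below reports that
# value as the Model Number. Verify both subsystems are advertised:
nvme discover -t tcp -a 10.0.0.1 -s 4420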
00:28:21.481 Max Number of I/O Queues: 1024 00:28:21.481 NVMe Specification Version (VS): 1.3 00:28:21.481 NVMe Specification Version (Identify): 1.3 00:28:21.481 Maximum Queue Entries: 1024 00:28:21.481 Contiguous Queues Required: No 00:28:21.481 Arbitration Mechanisms Supported 00:28:21.481 Weighted Round Robin: Not Supported 00:28:21.481 Vendor Specific: Not Supported 00:28:21.481 Reset Timeout: 7500 ms 00:28:21.481 Doorbell Stride: 4 bytes 00:28:21.481 NVM Subsystem Reset: Not Supported 00:28:21.481 Command Sets Supported 00:28:21.481 NVM Command Set: Supported 00:28:21.481 Boot Partition: Not Supported 00:28:21.481 Memory Page Size Minimum: 4096 bytes 00:28:21.481 Memory Page Size Maximum: 4096 bytes 00:28:21.481 Persistent Memory Region: Not Supported 00:28:21.481 Optional Asynchronous Events Supported 00:28:21.481 Namespace Attribute Notices: Not Supported 00:28:21.481 Firmware Activation Notices: Not Supported 00:28:21.481 ANA Change Notices: Not Supported 00:28:21.481 PLE Aggregate Log Change Notices: Not Supported 00:28:21.481 LBA Status Info Alert Notices: Not Supported 00:28:21.481 EGE Aggregate Log Change Notices: Not Supported 00:28:21.481 Normal NVM Subsystem Shutdown event: Not Supported 00:28:21.481 Zone Descriptor Change Notices: Not Supported 00:28:21.481 Discovery Log Change Notices: Supported 00:28:21.481 Controller Attributes 00:28:21.481 128-bit Host Identifier: Not Supported 00:28:21.481 Non-Operational Permissive Mode: Not Supported 00:28:21.481 NVM Sets: Not Supported 00:28:21.481 Read Recovery Levels: Not Supported 00:28:21.481 Endurance Groups: Not Supported 00:28:21.481 Predictable Latency Mode: Not Supported 00:28:21.481 Traffic Based Keep ALive: Not Supported 00:28:21.481 Namespace Granularity: Not Supported 00:28:21.481 SQ Associations: Not Supported 00:28:21.481 UUID List: Not Supported 00:28:21.481 Multi-Domain Subsystem: Not Supported 00:28:21.481 Fixed Capacity Management: Not Supported 00:28:21.481 Variable Capacity Management: Not Supported 00:28:21.481 Delete Endurance Group: Not Supported 00:28:21.481 Delete NVM Set: Not Supported 00:28:21.481 Extended LBA Formats Supported: Not Supported 00:28:21.481 Flexible Data Placement Supported: Not Supported 00:28:21.481 00:28:21.481 Controller Memory Buffer Support 00:28:21.481 ================================ 00:28:21.481 Supported: No 00:28:21.481 00:28:21.481 Persistent Memory Region Support 00:28:21.481 ================================ 00:28:21.481 Supported: No 00:28:21.481 00:28:21.481 Admin Command Set Attributes 00:28:21.481 ============================ 00:28:21.481 Security Send/Receive: Not Supported 00:28:21.481 Format NVM: Not Supported 00:28:21.481 Firmware Activate/Download: Not Supported 00:28:21.481 Namespace Management: Not Supported 00:28:21.481 Device Self-Test: Not Supported 00:28:21.481 Directives: Not Supported 00:28:21.481 NVMe-MI: Not Supported 00:28:21.482 Virtualization Management: Not Supported 00:28:21.482 Doorbell Buffer Config: Not Supported 00:28:21.482 Get LBA Status Capability: Not Supported 00:28:21.482 Command & Feature Lockdown Capability: Not Supported 00:28:21.482 Abort Command Limit: 1 00:28:21.482 Async Event Request Limit: 1 00:28:21.482 Number of Firmware Slots: N/A 00:28:21.482 Firmware Slot 1 Read-Only: N/A 00:28:21.482 Firmware Activation Without Reset: N/A 00:28:21.482 Multiple Update Detection Support: N/A 00:28:21.482 Firmware Update Granularity: No Information Provided 00:28:21.482 Per-Namespace SMART Log: No 00:28:21.482 Asymmetric Namespace Access Log Page: 
Not Supported 00:28:21.482 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:28:21.482 Command Effects Log Page: Not Supported 00:28:21.482 Get Log Page Extended Data: Supported 00:28:21.482 Telemetry Log Pages: Not Supported 00:28:21.482 Persistent Event Log Pages: Not Supported 00:28:21.482 Supported Log Pages Log Page: May Support 00:28:21.482 Commands Supported & Effects Log Page: Not Supported 00:28:21.482 Feature Identifiers & Effects Log Page:May Support 00:28:21.482 NVMe-MI Commands & Effects Log Page: May Support 00:28:21.482 Data Area 4 for Telemetry Log: Not Supported 00:28:21.482 Error Log Page Entries Supported: 1 00:28:21.482 Keep Alive: Not Supported 00:28:21.482 00:28:21.482 NVM Command Set Attributes 00:28:21.482 ========================== 00:28:21.482 Submission Queue Entry Size 00:28:21.482 Max: 1 00:28:21.482 Min: 1 00:28:21.482 Completion Queue Entry Size 00:28:21.482 Max: 1 00:28:21.482 Min: 1 00:28:21.482 Number of Namespaces: 0 00:28:21.482 Compare Command: Not Supported 00:28:21.482 Write Uncorrectable Command: Not Supported 00:28:21.482 Dataset Management Command: Not Supported 00:28:21.482 Write Zeroes Command: Not Supported 00:28:21.482 Set Features Save Field: Not Supported 00:28:21.482 Reservations: Not Supported 00:28:21.482 Timestamp: Not Supported 00:28:21.482 Copy: Not Supported 00:28:21.482 Volatile Write Cache: Not Present 00:28:21.482 Atomic Write Unit (Normal): 1 00:28:21.482 Atomic Write Unit (PFail): 1 00:28:21.482 Atomic Compare & Write Unit: 1 00:28:21.482 Fused Compare & Write: Not Supported 00:28:21.482 Scatter-Gather List 00:28:21.482 SGL Command Set: Supported 00:28:21.482 SGL Keyed: Not Supported 00:28:21.482 SGL Bit Bucket Descriptor: Not Supported 00:28:21.482 SGL Metadata Pointer: Not Supported 00:28:21.482 Oversized SGL: Not Supported 00:28:21.482 SGL Metadata Address: Not Supported 00:28:21.482 SGL Offset: Supported 00:28:21.482 Transport SGL Data Block: Not Supported 00:28:21.482 Replay Protected Memory Block: Not Supported 00:28:21.482 00:28:21.482 Firmware Slot Information 00:28:21.482 ========================= 00:28:21.482 Active slot: 0 00:28:21.482 00:28:21.482 00:28:21.482 Error Log 00:28:21.482 ========= 00:28:21.482 00:28:21.482 Active Namespaces 00:28:21.482 ================= 00:28:21.482 Discovery Log Page 00:28:21.482 ================== 00:28:21.482 Generation Counter: 2 00:28:21.482 Number of Records: 2 00:28:21.482 Record Format: 0 00:28:21.482 00:28:21.482 Discovery Log Entry 0 00:28:21.482 ---------------------- 00:28:21.482 Transport Type: 3 (TCP) 00:28:21.482 Address Family: 1 (IPv4) 00:28:21.482 Subsystem Type: 3 (Current Discovery Subsystem) 00:28:21.482 Entry Flags: 00:28:21.482 Duplicate Returned Information: 0 00:28:21.482 Explicit Persistent Connection Support for Discovery: 0 00:28:21.482 Transport Requirements: 00:28:21.482 Secure Channel: Not Specified 00:28:21.482 Port ID: 1 (0x0001) 00:28:21.482 Controller ID: 65535 (0xffff) 00:28:21.482 Admin Max SQ Size: 32 00:28:21.482 Transport Service Identifier: 4420 00:28:21.482 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:28:21.482 Transport Address: 10.0.0.1 00:28:21.482 Discovery Log Entry 1 00:28:21.482 ---------------------- 00:28:21.482 Transport Type: 3 (TCP) 00:28:21.482 Address Family: 1 (IPv4) 00:28:21.482 Subsystem Type: 2 (NVM Subsystem) 00:28:21.482 Entry Flags: 00:28:21.482 Duplicate Returned Information: 0 00:28:21.482 Explicit Persistent Connection Support for Discovery: 0 00:28:21.482 Transport Requirements: 00:28:21.482 
Secure Channel: Not Specified 00:28:21.482 Port ID: 1 (0x0001) 00:28:21.482 Controller ID: 65535 (0xffff) 00:28:21.482 Admin Max SQ Size: 32 00:28:21.482 Transport Service Identifier: 4420 00:28:21.482 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:28:21.482 Transport Address: 10.0.0.1 00:28:21.482 10:09:34 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:28:21.482 get_feature(0x01) failed 00:28:21.482 get_feature(0x02) failed 00:28:21.482 get_feature(0x04) failed 00:28:21.482 ===================================================== 00:28:21.482 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:28:21.482 ===================================================== 00:28:21.482 Controller Capabilities/Features 00:28:21.482 ================================ 00:28:21.482 Vendor ID: 0000 00:28:21.482 Subsystem Vendor ID: 0000 00:28:21.482 Serial Number: 8b63016b173d978571bf 00:28:21.482 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:28:21.482 Firmware Version: 6.7.0-68 00:28:21.482 Recommended Arb Burst: 6 00:28:21.482 IEEE OUI Identifier: 00 00 00 00:28:21.482 Multi-path I/O 00:28:21.482 May have multiple subsystem ports: Yes 00:28:21.482 May have multiple controllers: Yes 00:28:21.482 Associated with SR-IOV VF: No 00:28:21.482 Max Data Transfer Size: Unlimited 00:28:21.482 Max Number of Namespaces: 1024 00:28:21.482 Max Number of I/O Queues: 128 00:28:21.482 NVMe Specification Version (VS): 1.3 00:28:21.482 NVMe Specification Version (Identify): 1.3 00:28:21.482 Maximum Queue Entries: 1024 00:28:21.482 Contiguous Queues Required: No 00:28:21.482 Arbitration Mechanisms Supported 00:28:21.482 Weighted Round Robin: Not Supported 00:28:21.482 Vendor Specific: Not Supported 00:28:21.482 Reset Timeout: 7500 ms 00:28:21.482 Doorbell Stride: 4 bytes 00:28:21.482 NVM Subsystem Reset: Not Supported 00:28:21.482 Command Sets Supported 00:28:21.482 NVM Command Set: Supported 00:28:21.482 Boot Partition: Not Supported 00:28:21.482 Memory Page Size Minimum: 4096 bytes 00:28:21.482 Memory Page Size Maximum: 4096 bytes 00:28:21.482 Persistent Memory Region: Not Supported 00:28:21.482 Optional Asynchronous Events Supported 00:28:21.482 Namespace Attribute Notices: Supported 00:28:21.482 Firmware Activation Notices: Not Supported 00:28:21.482 ANA Change Notices: Supported 00:28:21.482 PLE Aggregate Log Change Notices: Not Supported 00:28:21.482 LBA Status Info Alert Notices: Not Supported 00:28:21.482 EGE Aggregate Log Change Notices: Not Supported 00:28:21.482 Normal NVM Subsystem Shutdown event: Not Supported 00:28:21.482 Zone Descriptor Change Notices: Not Supported 00:28:21.482 Discovery Log Change Notices: Not Supported 00:28:21.482 Controller Attributes 00:28:21.482 128-bit Host Identifier: Supported 00:28:21.482 Non-Operational Permissive Mode: Not Supported 00:28:21.482 NVM Sets: Not Supported 00:28:21.482 Read Recovery Levels: Not Supported 00:28:21.482 Endurance Groups: Not Supported 00:28:21.482 Predictable Latency Mode: Not Supported 00:28:21.482 Traffic Based Keep ALive: Supported 00:28:21.482 Namespace Granularity: Not Supported 00:28:21.482 SQ Associations: Not Supported 00:28:21.482 UUID List: Not Supported 00:28:21.482 Multi-Domain Subsystem: Not Supported 00:28:21.482 Fixed Capacity Management: Not Supported 00:28:21.482 Variable Capacity Management: Not Supported 00:28:21.482 
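For reference while the second identify dump continues below: the two spdk_nvme_identify runs differ only in the subnqn field of the -r transport ID string, first the well-known discovery NQN and then the exported data subsystem, which is why only the second report carries ANA, keep-alive and namespace details. The get_feature(0x01/0x02/0x04/0x05) failed lines are printed by the tool when the kernel target rejects those optional Get Features commands; the test still passes. Condensed from the trace:

identify=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify
# Discovery controller (well-known discovery NQN):
"$identify" -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery'
# NVM subsystem exported by the kernel target (backed by /dev/nvme1n1):
"$identify" -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'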
Delete Endurance Group: Not Supported 00:28:21.482 Delete NVM Set: Not Supported 00:28:21.482 Extended LBA Formats Supported: Not Supported 00:28:21.482 Flexible Data Placement Supported: Not Supported 00:28:21.482 00:28:21.482 Controller Memory Buffer Support 00:28:21.482 ================================ 00:28:21.482 Supported: No 00:28:21.482 00:28:21.482 Persistent Memory Region Support 00:28:21.482 ================================ 00:28:21.482 Supported: No 00:28:21.482 00:28:21.482 Admin Command Set Attributes 00:28:21.482 ============================ 00:28:21.482 Security Send/Receive: Not Supported 00:28:21.482 Format NVM: Not Supported 00:28:21.482 Firmware Activate/Download: Not Supported 00:28:21.482 Namespace Management: Not Supported 00:28:21.482 Device Self-Test: Not Supported 00:28:21.482 Directives: Not Supported 00:28:21.482 NVMe-MI: Not Supported 00:28:21.482 Virtualization Management: Not Supported 00:28:21.482 Doorbell Buffer Config: Not Supported 00:28:21.482 Get LBA Status Capability: Not Supported 00:28:21.482 Command & Feature Lockdown Capability: Not Supported 00:28:21.482 Abort Command Limit: 4 00:28:21.482 Async Event Request Limit: 4 00:28:21.482 Number of Firmware Slots: N/A 00:28:21.482 Firmware Slot 1 Read-Only: N/A 00:28:21.482 Firmware Activation Without Reset: N/A 00:28:21.482 Multiple Update Detection Support: N/A 00:28:21.482 Firmware Update Granularity: No Information Provided 00:28:21.482 Per-Namespace SMART Log: Yes 00:28:21.482 Asymmetric Namespace Access Log Page: Supported 00:28:21.482 ANA Transition Time : 10 sec 00:28:21.482 00:28:21.482 Asymmetric Namespace Access Capabilities 00:28:21.482 ANA Optimized State : Supported 00:28:21.483 ANA Non-Optimized State : Supported 00:28:21.483 ANA Inaccessible State : Supported 00:28:21.483 ANA Persistent Loss State : Supported 00:28:21.483 ANA Change State : Supported 00:28:21.483 ANAGRPID is not changed : No 00:28:21.483 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:28:21.483 00:28:21.483 ANA Group Identifier Maximum : 128 00:28:21.483 Number of ANA Group Identifiers : 128 00:28:21.483 Max Number of Allowed Namespaces : 1024 00:28:21.483 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:28:21.483 Command Effects Log Page: Supported 00:28:21.483 Get Log Page Extended Data: Supported 00:28:21.483 Telemetry Log Pages: Not Supported 00:28:21.483 Persistent Event Log Pages: Not Supported 00:28:21.483 Supported Log Pages Log Page: May Support 00:28:21.483 Commands Supported & Effects Log Page: Not Supported 00:28:21.483 Feature Identifiers & Effects Log Page:May Support 00:28:21.483 NVMe-MI Commands & Effects Log Page: May Support 00:28:21.483 Data Area 4 for Telemetry Log: Not Supported 00:28:21.483 Error Log Page Entries Supported: 128 00:28:21.483 Keep Alive: Supported 00:28:21.483 Keep Alive Granularity: 1000 ms 00:28:21.483 00:28:21.483 NVM Command Set Attributes 00:28:21.483 ========================== 00:28:21.483 Submission Queue Entry Size 00:28:21.483 Max: 64 00:28:21.483 Min: 64 00:28:21.483 Completion Queue Entry Size 00:28:21.483 Max: 16 00:28:21.483 Min: 16 00:28:21.483 Number of Namespaces: 1024 00:28:21.483 Compare Command: Not Supported 00:28:21.483 Write Uncorrectable Command: Not Supported 00:28:21.483 Dataset Management Command: Supported 00:28:21.483 Write Zeroes Command: Supported 00:28:21.483 Set Features Save Field: Not Supported 00:28:21.483 Reservations: Not Supported 00:28:21.483 Timestamp: Not Supported 00:28:21.483 Copy: Not Supported 00:28:21.483 Volatile Write Cache: Present 
00:28:21.483 Atomic Write Unit (Normal): 1 00:28:21.483 Atomic Write Unit (PFail): 1 00:28:21.483 Atomic Compare & Write Unit: 1 00:28:21.483 Fused Compare & Write: Not Supported 00:28:21.483 Scatter-Gather List 00:28:21.483 SGL Command Set: Supported 00:28:21.483 SGL Keyed: Not Supported 00:28:21.483 SGL Bit Bucket Descriptor: Not Supported 00:28:21.483 SGL Metadata Pointer: Not Supported 00:28:21.483 Oversized SGL: Not Supported 00:28:21.483 SGL Metadata Address: Not Supported 00:28:21.483 SGL Offset: Supported 00:28:21.483 Transport SGL Data Block: Not Supported 00:28:21.483 Replay Protected Memory Block: Not Supported 00:28:21.483 00:28:21.483 Firmware Slot Information 00:28:21.483 ========================= 00:28:21.483 Active slot: 0 00:28:21.483 00:28:21.483 Asymmetric Namespace Access 00:28:21.483 =========================== 00:28:21.483 Change Count : 0 00:28:21.483 Number of ANA Group Descriptors : 1 00:28:21.483 ANA Group Descriptor : 0 00:28:21.483 ANA Group ID : 1 00:28:21.483 Number of NSID Values : 1 00:28:21.483 Change Count : 0 00:28:21.483 ANA State : 1 00:28:21.483 Namespace Identifier : 1 00:28:21.483 00:28:21.483 Commands Supported and Effects 00:28:21.483 ============================== 00:28:21.483 Admin Commands 00:28:21.483 -------------- 00:28:21.483 Get Log Page (02h): Supported 00:28:21.483 Identify (06h): Supported 00:28:21.483 Abort (08h): Supported 00:28:21.483 Set Features (09h): Supported 00:28:21.483 Get Features (0Ah): Supported 00:28:21.483 Asynchronous Event Request (0Ch): Supported 00:28:21.483 Keep Alive (18h): Supported 00:28:21.483 I/O Commands 00:28:21.483 ------------ 00:28:21.483 Flush (00h): Supported 00:28:21.483 Write (01h): Supported LBA-Change 00:28:21.483 Read (02h): Supported 00:28:21.483 Write Zeroes (08h): Supported LBA-Change 00:28:21.483 Dataset Management (09h): Supported 00:28:21.483 00:28:21.483 Error Log 00:28:21.483 ========= 00:28:21.483 Entry: 0 00:28:21.483 Error Count: 0x3 00:28:21.483 Submission Queue Id: 0x0 00:28:21.483 Command Id: 0x5 00:28:21.483 Phase Bit: 0 00:28:21.483 Status Code: 0x2 00:28:21.483 Status Code Type: 0x0 00:28:21.483 Do Not Retry: 1 00:28:21.483 Error Location: 0x28 00:28:21.483 LBA: 0x0 00:28:21.483 Namespace: 0x0 00:28:21.483 Vendor Log Page: 0x0 00:28:21.483 ----------- 00:28:21.483 Entry: 1 00:28:21.483 Error Count: 0x2 00:28:21.483 Submission Queue Id: 0x0 00:28:21.483 Command Id: 0x5 00:28:21.483 Phase Bit: 0 00:28:21.483 Status Code: 0x2 00:28:21.483 Status Code Type: 0x0 00:28:21.483 Do Not Retry: 1 00:28:21.483 Error Location: 0x28 00:28:21.483 LBA: 0x0 00:28:21.483 Namespace: 0x0 00:28:21.483 Vendor Log Page: 0x0 00:28:21.483 ----------- 00:28:21.483 Entry: 2 00:28:21.483 Error Count: 0x1 00:28:21.483 Submission Queue Id: 0x0 00:28:21.483 Command Id: 0x4 00:28:21.483 Phase Bit: 0 00:28:21.483 Status Code: 0x2 00:28:21.483 Status Code Type: 0x0 00:28:21.483 Do Not Retry: 1 00:28:21.483 Error Location: 0x28 00:28:21.483 LBA: 0x0 00:28:21.483 Namespace: 0x0 00:28:21.483 Vendor Log Page: 0x0 00:28:21.483 00:28:21.483 Number of Queues 00:28:21.483 ================ 00:28:21.483 Number of I/O Submission Queues: 128 00:28:21.483 Number of I/O Completion Queues: 128 00:28:21.483 00:28:21.483 ZNS Specific Controller Data 00:28:21.483 ============================ 00:28:21.483 Zone Append Size Limit: 0 00:28:21.483 00:28:21.483 00:28:21.483 Active Namespaces 00:28:21.483 ================= 00:28:21.483 get_feature(0x05) failed 00:28:21.483 Namespace ID:1 00:28:21.483 Command Set Identifier: NVM (00h) 
00:28:21.483 Deallocate: Supported 00:28:21.483 Deallocated/Unwritten Error: Not Supported 00:28:21.483 Deallocated Read Value: Unknown 00:28:21.483 Deallocate in Write Zeroes: Not Supported 00:28:21.483 Deallocated Guard Field: 0xFFFF 00:28:21.483 Flush: Supported 00:28:21.483 Reservation: Not Supported 00:28:21.483 Namespace Sharing Capabilities: Multiple Controllers 00:28:21.483 Size (in LBAs): 1310720 (5GiB) 00:28:21.483 Capacity (in LBAs): 1310720 (5GiB) 00:28:21.483 Utilization (in LBAs): 1310720 (5GiB) 00:28:21.483 UUID: 44a03434-7c25-4b10-a119-30ae969163e0 00:28:21.483 Thin Provisioning: Not Supported 00:28:21.483 Per-NS Atomic Units: Yes 00:28:21.483 Atomic Boundary Size (Normal): 0 00:28:21.483 Atomic Boundary Size (PFail): 0 00:28:21.483 Atomic Boundary Offset: 0 00:28:21.483 NGUID/EUI64 Never Reused: No 00:28:21.483 ANA group ID: 1 00:28:21.483 Namespace Write Protected: No 00:28:21.483 Number of LBA Formats: 1 00:28:21.483 Current LBA Format: LBA Format #00 00:28:21.483 LBA Format #00: Data Size: 4096 Metadata Size: 0 00:28:21.483 00:28:21.483 10:09:35 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:28:21.483 10:09:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:21.483 10:09:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync 00:28:21.742 10:09:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:21.742 10:09:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e 00:28:21.742 10:09:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:21.742 10:09:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:21.742 rmmod nvme_tcp 00:28:21.742 rmmod nvme_fabrics 00:28:21.742 10:09:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:21.742 10:09:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e 00:28:21.742 10:09:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0 00:28:21.742 10:09:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:28:21.742 10:09:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:21.742 10:09:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:21.742 10:09:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:21.742 10:09:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:21.742 10:09:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:21.742 10:09:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:21.742 10:09:35 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:21.742 10:09:35 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:21.742 10:09:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:28:21.742 10:09:35 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:28:21.742 10:09:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:28:21.742 
10:09:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 0 00:28:21.742 10:09:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:28:21.742 10:09:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:28:21.742 10:09:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:28:21.742 10:09:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:28:21.742 10:09:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:28:21.742 10:09:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:28:21.742 10:09:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:28:22.679 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:28:22.679 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:28:22.679 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:28:22.938 00:28:22.938 real 0m3.143s 00:28:22.938 user 0m1.073s 00:28:22.938 sys 0m1.672s 00:28:22.938 10:09:36 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:22.938 10:09:36 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:28:22.938 ************************************ 00:28:22.938 END TEST nvmf_identify_kernel_target 00:28:22.938 ************************************ 00:28:22.938 10:09:36 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:28:22.938 10:09:36 nvmf_tcp -- nvmf/nvmf.sh@105 -- # run_test nvmf_auth_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:28:22.938 10:09:36 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:28:22.938 10:09:36 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:22.938 10:09:36 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:22.938 ************************************ 00:28:22.938 START TEST nvmf_auth_host 00:28:22.938 ************************************ 00:28:22.938 10:09:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:28:22.938 * Looking for test storage... 
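The clean_kernel_target steps above undo the configfs setup in reverse order before nvmf_auth_host starts below; condensed, with the same caveat that the redirect target of the bare echo is inferred rather than shown by xtrace:

subnqn=nqn.2016-06.io.spdk:testnqn
echo 0 > "/sys/kernel/config/nvmet/subsystems/$subnqn/namespaces/1/enable"  # assumed target of the `echo 0` above
rm -f  "/sys/kernel/config/nvmet/ports/1/subsystems/$subnqn"
rmdir  "/sys/kernel/config/nvmet/subsystems/$subnqn/namespaces/1"
rmdir  /sys/kernel/config/nvmet/ports/1
rmdir  "/sys/kernel/config/nvmet/subsystems/$subnqn"
modprobe -r nvmet_tcp nvmet                         # nvme-tcp/nvme-fabrics were already removed above
/home/vagrant/spdk_repo/spdk/scripts/setup.sh       # rebind the local NVMe devices for the next test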
00:28:22.938 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:28:22.938 10:09:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:28:22.938 10:09:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:28:22.938 10:09:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:22.938 10:09:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:22.938 10:09:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:22.938 10:09:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:22.938 10:09:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:22.938 10:09:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:22.938 10:09:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:22.938 10:09:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:22.938 10:09:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:22.938 10:09:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:22.938 10:09:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec 00:28:22.938 10:09:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=a2b6b25a-cc90-4aea-9f09-c06f8a634aec 00:28:22.938 10:09:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:22.938 10:09:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:22.938 10:09:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:28:22.938 10:09:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:22.938 10:09:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:28:22.938 10:09:36 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:22.938 10:09:36 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:22.938 10:09:36 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:22.939 10:09:36 nvmf_tcp.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:22.939 10:09:36 nvmf_tcp.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:22.939 10:09:36 nvmf_tcp.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:22.939 10:09:36 nvmf_tcp.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:28:22.939 10:09:36 nvmf_tcp.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:22.939 10:09:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@47 -- # : 0 00:28:22.939 10:09:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:22.939 10:09:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:22.939 10:09:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:22.939 10:09:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:22.939 10:09:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:22.939 10:09:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:22.939 10:09:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:22.939 10:09:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:22.939 10:09:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:28:22.939 10:09:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:28:22.939 10:09:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:28:22.939 10:09:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:28:22.939 10:09:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:28:22.939 10:09:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:28:22.939 10:09:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:28:22.939 10:09:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # 
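Two details from the common.sh setup above are worth keeping in mind for the rest of this test: the host identity comes from nvme gen-hostnqn, with NVME_HOSTID being the UUID portion of the resulting NQN (the same a2b6b25a-... value the earlier discover call passed via --hostnqn/--hostid), and NVME_HOST bundles the two flags for reuse. A minimal equivalent; the parameter expansion is an assumption, since the trace only shows the resulting values:

NVME_HOSTNQN=$(nvme gen-hostnqn)        # nqn.2014-08.org.nvmexpress:uuid:<random uuid>
NVME_HOSTID=${NVME_HOSTNQN##*:}         # keep only the UUID after the last ':' (assumed derivation)
NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")
# how the bundled flags get reused, e.g. by the discover call earlier in this log:
nvme discover -t tcp -a 10.0.0.1 -s 4420 "${NVME_HOST[@]}"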
ckeys=() 00:28:22.939 10:09:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:28:22.939 10:09:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:22.939 10:09:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:22.939 10:09:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:22.939 10:09:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:22.939 10:09:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:22.939 10:09:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:22.939 10:09:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:22.939 10:09:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:22.939 10:09:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:28:22.939 10:09:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:28:22.939 10:09:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:28:22.939 10:09:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:28:22.939 10:09:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:28:22.939 10:09:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@432 -- # nvmf_veth_init 00:28:22.939 10:09:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:22.939 10:09:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:22.939 10:09:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:28:22.939 10:09:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:28:22.939 10:09:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:28:22.939 10:09:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:28:22.939 10:09:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:28:22.939 10:09:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:22.939 10:09:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:28:22.939 10:09:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:28:22.939 10:09:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:28:22.939 10:09:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:28:22.939 10:09:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:28:23.199 10:09:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:28:23.199 Cannot find device "nvmf_tgt_br" 00:28:23.199 10:09:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@155 -- # true 00:28:23.199 10:09:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:28:23.199 Cannot find device "nvmf_tgt_br2" 00:28:23.199 10:09:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@156 -- # true 00:28:23.199 10:09:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:28:23.199 10:09:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:28:23.199 Cannot find device "nvmf_tgt_br" 
00:28:23.199 10:09:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@158 -- # true 00:28:23.199 10:09:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:28:23.199 Cannot find device "nvmf_tgt_br2" 00:28:23.199 10:09:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@159 -- # true 00:28:23.199 10:09:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:28:23.199 10:09:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:28:23.199 10:09:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:28:23.199 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:28:23.199 10:09:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@162 -- # true 00:28:23.199 10:09:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:28:23.199 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:28:23.199 10:09:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@163 -- # true 00:28:23.199 10:09:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:28:23.199 10:09:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:28:23.199 10:09:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:28:23.199 10:09:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:28:23.199 10:09:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:28:23.199 10:09:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:28:23.199 10:09:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:28:23.199 10:09:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:28:23.199 10:09:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:28:23.199 10:09:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:28:23.199 10:09:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:28:23.199 10:09:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:28:23.199 10:09:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:28:23.458 10:09:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:28:23.458 10:09:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:28:23.458 10:09:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:28:23.458 10:09:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:28:23.458 10:09:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:28:23.458 10:09:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:28:23.458 10:09:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:28:23.458 10:09:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@198 -- # ip link set 
nvmf_tgt_br2 master nvmf_br 00:28:23.458 10:09:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:28:23.458 10:09:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:28:23.458 10:09:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:28:23.458 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:23.458 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.100 ms 00:28:23.458 00:28:23.458 --- 10.0.0.2 ping statistics --- 00:28:23.458 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:23.458 rtt min/avg/max/mdev = 0.100/0.100/0.100/0.000 ms 00:28:23.458 10:09:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:28:23.458 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:28:23.458 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.069 ms 00:28:23.458 00:28:23.458 --- 10.0.0.3 ping statistics --- 00:28:23.458 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:23.458 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:28:23.458 10:09:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:28:23.458 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:23.458 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.053 ms 00:28:23.458 00:28:23.458 --- 10.0.0.1 ping statistics --- 00:28:23.459 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:23.459 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:28:23.459 10:09:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:23.459 10:09:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@433 -- # return 0 00:28:23.459 10:09:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:23.459 10:09:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:23.459 10:09:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:23.459 10:09:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:23.459 10:09:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:23.459 10:09:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:23.459 10:09:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:23.459 10:09:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:28:23.459 10:09:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:23.459 10:09:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:23.459 10:09:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:23.459 10:09:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:28:23.459 10:09:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@481 -- # nvmfpid=91431 00:28:23.459 10:09:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@482 -- # waitforlisten 91431 00:28:23.459 10:09:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 91431 ']' 00:28:23.459 10:09:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:23.459 10:09:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:23.459 10:09:36 
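nvmf_veth_init above rebuilds the virtual topology the host tests run on (the "Cannot find device" and "Cannot open network namespace" messages are just the tolerant teardown of a previous run): a nvmf_tgt_ns_spdk namespace holding the target-side veth ends with 10.0.0.2 and 10.0.0.3, the initiator side keeping 10.0.0.1/24, a bridge nvmf_br tying the peer ends together, plus iptables rules accepting port 4420 and bridged forwarding; the three pings are the connectivity check. Condensed from the trace:

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
ip link set nvmf_init_if up; ip link set nvmf_init_br up
ip link set nvmf_tgt_br up;  ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge && ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1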
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:23.459 10:09:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:23.459 10:09:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:24.396 10:09:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:24.396 10:09:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:28:24.396 10:09:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:24.396 10:09:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:24.396 10:09:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:24.396 10:09:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:24.396 10:09:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:28:24.396 10:09:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:28:24.396 10:09:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:28:24.396 10:09:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:24.396 10:09:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:28:24.396 10:09:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:28:24.396 10:09:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:28:24.396 10:09:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:28:24.396 10:09:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=d8244ac4fb4be0788a7f6e9590348254 00:28:24.396 10:09:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:28:24.396 10:09:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.318 00:28:24.396 10:09:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key d8244ac4fb4be0788a7f6e9590348254 0 00:28:24.396 10:09:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 d8244ac4fb4be0788a7f6e9590348254 0 00:28:24.396 10:09:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:28:24.396 10:09:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:28:24.396 10:09:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=d8244ac4fb4be0788a7f6e9590348254 00:28:24.396 10:09:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:28:24.396 10:09:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:28:24.396 10:09:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.318 00:28:24.396 10:09:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.318 00:28:24.396 10:09:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.318 00:28:24.396 10:09:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:28:24.396 10:09:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:28:24.396 10:09:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:24.396 10:09:37 nvmf_tcp.nvmf_auth_host -- 
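With the topology up, nvmfappstart launches the SPDK target inside the namespace with the nvme_auth debug log flag, and waitforlisten polls until the app answers on its RPC socket (the /var/tmp/spdk.sock shown above). A minimal sketch of that startup; the polling loop is a simplification of the real helper, which also verifies the process is still alive:

ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth &
nvmfpid=$!
rpc_sock=/var/tmp/spdk.sock
for ((i = 0; i < 100; i++)); do   # the real waitforlisten also caps retries at 100
    [[ -S $rpc_sock ]] && break
    sleep 0.1
done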
nvmf/common.sh@724 -- # local -A digests 00:28:24.396 10:09:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:28:24.396 10:09:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:28:24.396 10:09:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:28:24.396 10:09:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=5dd2f243d1a7fad6a009cbd8a0db700fe22b3031ec1ff5b719d1d6d2ed576302 00:28:24.396 10:09:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:28:24.396 10:09:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.dgA 00:28:24.396 10:09:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 5dd2f243d1a7fad6a009cbd8a0db700fe22b3031ec1ff5b719d1d6d2ed576302 3 00:28:24.396 10:09:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 5dd2f243d1a7fad6a009cbd8a0db700fe22b3031ec1ff5b719d1d6d2ed576302 3 00:28:24.396 10:09:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:28:24.396 10:09:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:28:24.396 10:09:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=5dd2f243d1a7fad6a009cbd8a0db700fe22b3031ec1ff5b719d1d6d2ed576302 00:28:24.396 10:09:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:28:24.396 10:09:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:28:24.655 10:09:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.dgA 00:28:24.655 10:09:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.dgA 00:28:24.655 10:09:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.dgA 00:28:24.655 10:09:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:28:24.655 10:09:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:28:24.655 10:09:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:24.655 10:09:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:28:24.655 10:09:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:28:24.655 10:09:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:28:24.655 10:09:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:28:24.655 10:09:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=6e25be07339efee4107747db11829e0466ca6c7df08fd523 00:28:24.655 10:09:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:28:24.655 10:09:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.5mw 00:28:24.655 10:09:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 6e25be07339efee4107747db11829e0466ca6c7df08fd523 0 00:28:24.655 10:09:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 6e25be07339efee4107747db11829e0466ca6c7df08fd523 0 00:28:24.655 10:09:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:28:24.655 10:09:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:28:24.655 10:09:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=6e25be07339efee4107747db11829e0466ca6c7df08fd523 00:28:24.655 10:09:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:28:24.655 10:09:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # 
python - 00:28:24.655 10:09:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.5mw 00:28:24.655 10:09:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.5mw 00:28:24.655 10:09:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.5mw 00:28:24.655 10:09:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:28:24.655 10:09:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:28:24.655 10:09:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:24.655 10:09:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:28:24.655 10:09:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:28:24.655 10:09:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:28:24.655 10:09:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:28:24.655 10:09:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=bce9af9a8296cc27f7897fc01cacaecef30da37e0816a636 00:28:24.655 10:09:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:28:24.655 10:09:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.RpG 00:28:24.655 10:09:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key bce9af9a8296cc27f7897fc01cacaecef30da37e0816a636 2 00:28:24.655 10:09:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 bce9af9a8296cc27f7897fc01cacaecef30da37e0816a636 2 00:28:24.655 10:09:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:28:24.655 10:09:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:28:24.655 10:09:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=bce9af9a8296cc27f7897fc01cacaecef30da37e0816a636 00:28:24.655 10:09:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:28:24.655 10:09:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:28:24.655 10:09:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.RpG 00:28:24.655 10:09:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.RpG 00:28:24.655 10:09:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.RpG 00:28:24.655 10:09:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:28:24.655 10:09:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:28:24.655 10:09:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:24.656 10:09:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:28:24.656 10:09:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:28:24.656 10:09:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:28:24.656 10:09:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:28:24.656 10:09:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=feabe7109dd8102490447d53cba2f43e 00:28:24.656 10:09:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:28:24.656 10:09:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.d3r 00:28:24.656 10:09:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key feabe7109dd8102490447d53cba2f43e 
1 00:28:24.656 10:09:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 feabe7109dd8102490447d53cba2f43e 1 00:28:24.656 10:09:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:28:24.656 10:09:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:28:24.656 10:09:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=feabe7109dd8102490447d53cba2f43e 00:28:24.656 10:09:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:28:24.656 10:09:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:28:24.656 10:09:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.d3r 00:28:24.656 10:09:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.d3r 00:28:24.656 10:09:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.d3r 00:28:24.656 10:09:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:28:24.656 10:09:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:28:24.656 10:09:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:24.656 10:09:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:28:24.656 10:09:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:28:24.656 10:09:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:28:24.656 10:09:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:28:24.656 10:09:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=fc8827f2403e43c5f2d73664a3982be0 00:28:24.656 10:09:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:28:24.916 10:09:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.zAu 00:28:24.916 10:09:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key fc8827f2403e43c5f2d73664a3982be0 1 00:28:24.916 10:09:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 fc8827f2403e43c5f2d73664a3982be0 1 00:28:24.916 10:09:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:28:24.916 10:09:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:28:24.916 10:09:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=fc8827f2403e43c5f2d73664a3982be0 00:28:24.916 10:09:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:28:24.916 10:09:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:28:24.916 10:09:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.zAu 00:28:24.916 10:09:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.zAu 00:28:24.916 10:09:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.zAu 00:28:24.916 10:09:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:28:24.916 10:09:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:28:24.916 10:09:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:24.916 10:09:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:28:24.916 10:09:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:28:24.916 10:09:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:28:24.916 10:09:38 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:28:24.916 10:09:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=8ab77bcd45a08ce14c49b8055341144a67202e2d0a78ba89 00:28:24.916 10:09:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:28:24.916 10:09:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.3uH 00:28:24.916 10:09:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 8ab77bcd45a08ce14c49b8055341144a67202e2d0a78ba89 2 00:28:24.916 10:09:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 8ab77bcd45a08ce14c49b8055341144a67202e2d0a78ba89 2 00:28:24.916 10:09:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:28:24.916 10:09:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:28:24.916 10:09:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=8ab77bcd45a08ce14c49b8055341144a67202e2d0a78ba89 00:28:24.916 10:09:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:28:24.916 10:09:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:28:24.916 10:09:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.3uH 00:28:24.916 10:09:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.3uH 00:28:24.916 10:09:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.3uH 00:28:24.916 10:09:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:28:24.916 10:09:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:28:24.916 10:09:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:24.916 10:09:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:28:24.916 10:09:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:28:24.916 10:09:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:28:24.916 10:09:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:28:24.916 10:09:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=974bcc398698358c8af4c720f8ac0577 00:28:24.916 10:09:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:28:24.916 10:09:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.27O 00:28:24.916 10:09:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 974bcc398698358c8af4c720f8ac0577 0 00:28:24.916 10:09:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 974bcc398698358c8af4c720f8ac0577 0 00:28:24.916 10:09:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:28:24.916 10:09:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:28:24.916 10:09:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=974bcc398698358c8af4c720f8ac0577 00:28:24.916 10:09:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:28:24.916 10:09:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:28:24.916 10:09:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.27O 00:28:24.916 10:09:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.27O 00:28:24.916 10:09:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.27O 00:28:24.916 10:09:38 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:28:24.916 10:09:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:28:24.916 10:09:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:24.916 10:09:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:28:24.916 10:09:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:28:24.916 10:09:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:28:24.916 10:09:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:28:24.916 10:09:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=77b70310b271d2b5e345e095dcdf45822976c1fabb8015030907264631cc0ef1 00:28:24.916 10:09:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:28:24.916 10:09:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.HvZ 00:28:24.916 10:09:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 77b70310b271d2b5e345e095dcdf45822976c1fabb8015030907264631cc0ef1 3 00:28:24.916 10:09:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 77b70310b271d2b5e345e095dcdf45822976c1fabb8015030907264631cc0ef1 3 00:28:24.916 10:09:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:28:24.916 10:09:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:28:24.916 10:09:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=77b70310b271d2b5e345e095dcdf45822976c1fabb8015030907264631cc0ef1 00:28:24.916 10:09:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:28:24.916 10:09:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:28:24.916 10:09:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.HvZ 00:28:24.916 10:09:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.HvZ 00:28:24.916 10:09:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.HvZ 00:28:24.916 10:09:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:28:24.916 10:09:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 91431 00:28:24.916 10:09:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 91431 ']' 00:28:24.916 10:09:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:24.916 10:09:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:24.916 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:24.916 10:09:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
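The gen_dhchap_key calls traced above each draw len/2 random bytes with xxd and hand the hex string to an inline 'python -' step, whose body xtrace does not capture, to produce the DHHC-1 secrets that appear later in this log. Below is a minimal sketch of that formatting step for the sha384/48 case; it assumes the four bytes appended before base64 encoding are a little-endian CRC-32 of the ASCII secret (the usual NVMe in-band auth secret representation) and uses python3 in place of the bare python invocation seen in the trace.
digest=2                                  # null=0 sha256=1 sha384=2 sha512=3, as in the digests[] map above
key=$(xxd -p -c0 -l 24 /dev/urandom)      # 24 random bytes -> 48 hex characters
file=$(mktemp -t spdk.key-sha384.XXX)
python3 - "$key" "$digest" > "$file" <<'PY'
import base64, sys, zlib
secret = sys.argv[1].encode()
# assumption: the trailing 4 bytes of the logged secrets are a little-endian CRC-32 of the secret
blob = secret + zlib.crc32(secret).to_bytes(4, "little")
print(f"DHHC-1:{int(sys.argv[2]):02d}:{base64.b64encode(blob).decode()}:")
PY
chmod 0600 "$file"
echo "$file"                              # e.g. /tmp/spdk.key-sha384.RpG above
Decoding one of the secrets that shows up further down (DHHC-1:02:YmNl...yZBA8Q==:) gives back the 48-character hex string generated here plus four extra checksum bytes, which is what this sketch reproduces; the exact helper lives in nvmf/common.sh and is not shown in the trace.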
00:28:24.916 10:09:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:24.916 10:09:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:25.176 10:09:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:25.176 10:09:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:28:25.176 10:09:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:28:25.176 10:09:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.318 00:28:25.176 10:09:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:25.176 10:09:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:25.176 10:09:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:25.176 10:09:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.dgA ]] 00:28:25.176 10:09:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.dgA 00:28:25.176 10:09:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:25.176 10:09:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:25.176 10:09:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:25.176 10:09:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:28:25.176 10:09:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.5mw 00:28:25.176 10:09:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:25.176 10:09:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:25.176 10:09:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:25.176 10:09:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.RpG ]] 00:28:25.176 10:09:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.RpG 00:28:25.176 10:09:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:25.176 10:09:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:25.436 10:09:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:25.436 10:09:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:28:25.436 10:09:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.d3r 00:28:25.436 10:09:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:25.436 10:09:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:25.436 10:09:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:25.436 10:09:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.zAu ]] 00:28:25.436 10:09:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.zAu 00:28:25.436 10:09:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:25.436 10:09:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:25.436 10:09:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:25.436 10:09:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 
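With the target process up on /var/tmp/spdk.sock, the host/auth.sh@80-82 records above register each generated secret as a named key over JSON-RPC, and the same pair of calls repeats just below for key3/ckey3 and key4. Condensed, the loop amounts to the following sketch; rpc_cmd is the autotest helper that forwards its arguments to the RPC server listening on that socket.
for i in "${!keys[@]}"; do
    rpc_cmd keyring_file_add_key "key$i" "${keys[i]}"                              # e.g. key1 /tmp/spdk.key-null.5mw
    [[ -n ${ckeys[i]} ]] && rpc_cmd keyring_file_add_key "ckey$i" "${ckeys[i]}"    # ckey4 is empty, so it is skipped
done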
00:28:25.436 10:09:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.3uH 00:28:25.436 10:09:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:25.436 10:09:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:25.436 10:09:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:25.436 10:09:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.27O ]] 00:28:25.436 10:09:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.27O 00:28:25.436 10:09:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:25.436 10:09:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:25.436 10:09:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:25.436 10:09:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:28:25.436 10:09:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.HvZ 00:28:25.436 10:09:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:25.436 10:09:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:25.436 10:09:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:25.436 10:09:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:28:25.436 10:09:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:28:25.436 10:09:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:28:25.436 10:09:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:25.436 10:09:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:25.436 10:09:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:25.436 10:09:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:25.436 10:09:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:25.436 10:09:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:25.436 10:09:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:25.436 10:09:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:25.436 10:09:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:25.436 10:09:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:25.436 10:09:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:28:25.436 10:09:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:28:25.436 10:09:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:28:25.436 10:09:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:28:25.436 10:09:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:28:25.436 10:09:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:28:25.436 10:09:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@639 -- # local block nvme 
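The records that follow, up through the discovery log output, stand up a kernel nvmet target for the loopback connection: load nvmet, pick a block device that is neither zoned nor in use, create the subsystem, namespace and port nodes under /sys/kernel/config/nvmet, and link the port to the subsystem. xtrace does not record where the echo commands are redirected, so the configfs attribute names in this sketch are filled in from the standard Linux nvmet layout and should be read as assumptions rather than a verbatim reconstruction of nvmf/common.sh.
modprobe nvmet
subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
ns=$subsys/namespaces/1
port=/sys/kernel/config/nvmet/ports/1
mkdir "$subsys"
mkdir "$ns"
mkdir "$port"
echo SPDK-nqn.2024-02.io.spdk:cnode0 > "$subsys/attr_serial"   # assumed target of the 'echo SPDK-...' record
echo 1 > "$subsys/attr_allow_any_host"                         # assumed target of the bare 'echo 1'
echo /dev/nvme1n1 > "$ns/device_path"                          # the device that passed the GPT/zoned checks below
echo 1 > "$ns/enable"
echo 10.0.0.1 > "$port/addr_traddr"
echo tcp > "$port/addr_trtype"
echo 4420 > "$port/addr_trsvcid"
echo ipv4 > "$port/addr_adrfam"
ln -s "$subsys" "$port/subsystems/"
The nvme discover output further down confirms the result: the port at 10.0.0.1:4420 exports both the discovery subsystem and nqn.2024-02.io.spdk:cnode0.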
00:28:25.436 10:09:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:28:25.436 10:09:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@642 -- # modprobe nvmet 00:28:25.436 10:09:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:28:25.436 10:09:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@647 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:28:26.004 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:28:26.004 Waiting for block devices as requested 00:28:26.004 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:28:26.004 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:28:26.941 10:09:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:28:26.941 10:09:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:28:26.941 10:09:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:28:26.941 10:09:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:28:26.941 10:09:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:28:26.941 10:09:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:28:26.941 10:09:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:28:26.941 10:09:40 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:28:26.941 10:09:40 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:28:26.941 No valid GPT data, bailing 00:28:26.941 10:09:40 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:28:26.941 10:09:40 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:28:26.941 10:09:40 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:28:26.941 10:09:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:28:26.941 10:09:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:28:26.941 10:09:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n2 ]] 00:28:26.941 10:09:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n2 00:28:26.941 10:09:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n2 00:28:26.941 10:09:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:28:26.941 10:09:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:28:26.941 10:09:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n2 00:28:26.941 10:09:40 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:28:26.941 10:09:40 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:28:26.941 No valid GPT data, bailing 00:28:26.941 10:09:40 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:28:26.941 10:09:40 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:28:26.941 10:09:40 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:28:26.941 10:09:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n2 00:28:26.941 10:09:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in 
/sys/block/nvme* 00:28:26.941 10:09:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n3 ]] 00:28:26.941 10:09:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n3 00:28:26.941 10:09:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n3 00:28:26.941 10:09:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:28:26.941 10:09:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:28:26.941 10:09:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n3 00:28:26.941 10:09:40 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:28:26.941 10:09:40 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:28:26.941 No valid GPT data, bailing 00:28:26.941 10:09:40 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:28:26.941 10:09:40 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:28:26.941 10:09:40 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:28:26.941 10:09:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n3 00:28:26.941 10:09:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:28:26.941 10:09:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme1n1 ]] 00:28:26.941 10:09:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme1n1 00:28:26.941 10:09:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:28:26.941 10:09:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:28:26.941 10:09:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:28:26.941 10:09:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme1n1 00:28:26.941 10:09:40 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:28:26.941 10:09:40 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:28:27.201 No valid GPT data, bailing 00:28:27.201 10:09:40 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:28:27.201 10:09:40 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:28:27.201 10:09:40 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:28:27.201 10:09:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme1n1 00:28:27.201 10:09:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@656 -- # [[ -b /dev/nvme1n1 ]] 00:28:27.201 10:09:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:28:27.201 10:09:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:28:27.201 10:09:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:28:27.201 10:09:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:28:27.201 10:09:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@667 -- # echo 1 00:28:27.201 10:09:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@668 -- # echo /dev/nvme1n1 00:28:27.201 10:09:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@669 -- # echo 1 00:28:27.201 10:09:40 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:28:27.201 10:09:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@672 -- # echo tcp 00:28:27.201 10:09:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@673 -- # echo 4420 00:28:27.201 10:09:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@674 -- # echo ipv4 00:28:27.201 10:09:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:28:27.201 10:09:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec --hostid=a2b6b25a-cc90-4aea-9f09-c06f8a634aec -a 10.0.0.1 -t tcp -s 4420 00:28:27.201 00:28:27.201 Discovery Log Number of Records 2, Generation counter 2 00:28:27.201 =====Discovery Log Entry 0====== 00:28:27.201 trtype: tcp 00:28:27.201 adrfam: ipv4 00:28:27.201 subtype: current discovery subsystem 00:28:27.201 treq: not specified, sq flow control disable supported 00:28:27.201 portid: 1 00:28:27.201 trsvcid: 4420 00:28:27.201 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:28:27.201 traddr: 10.0.0.1 00:28:27.201 eflags: none 00:28:27.201 sectype: none 00:28:27.201 =====Discovery Log Entry 1====== 00:28:27.201 trtype: tcp 00:28:27.201 adrfam: ipv4 00:28:27.201 subtype: nvme subsystem 00:28:27.201 treq: not specified, sq flow control disable supported 00:28:27.201 portid: 1 00:28:27.201 trsvcid: 4420 00:28:27.201 subnqn: nqn.2024-02.io.spdk:cnode0 00:28:27.201 traddr: 10.0.0.1 00:28:27.201 eflags: none 00:28:27.201 sectype: none 00:28:27.201 10:09:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:28:27.201 10:09:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:28:27.201 10:09:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:28:27.201 10:09:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:28:27.201 10:09:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:27.201 10:09:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:27.201 10:09:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:27.201 10:09:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:27.201 10:09:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmUyNWJlMDczMzllZmVlNDEwNzc0N2RiMTE4MjllMDQ2NmNhNmM3ZGYwOGZkNTIzL2wIVA==: 00:28:27.201 10:09:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YmNlOWFmOWE4Mjk2Y2MyN2Y3ODk3ZmMwMWNhY2FlY2VmMzBkYTM3ZTA4MTZhNjM2yZBA8Q==: 00:28:27.201 10:09:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:27.201 10:09:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:27.201 10:09:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmUyNWJlMDczMzllZmVlNDEwNzc0N2RiMTE4MjllMDQ2NmNhNmM3ZGYwOGZkNTIzL2wIVA==: 00:28:27.201 10:09:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YmNlOWFmOWE4Mjk2Y2MyN2Y3ODk3ZmMwMWNhY2FlY2VmMzBkYTM3ZTA4MTZhNjM2yZBA8Q==: ]] 00:28:27.201 10:09:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YmNlOWFmOWE4Mjk2Y2MyN2Y3ODk3ZmMwMWNhY2FlY2VmMzBkYTM3ZTA4MTZhNjM2yZBA8Q==: 00:28:27.201 10:09:40 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@93 -- # IFS=, 00:28:27.201 10:09:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:28:27.201 10:09:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:28:27.202 10:09:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:28:27.202 10:09:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:28:27.202 10:09:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:27.202 10:09:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:28:27.202 10:09:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:28:27.202 10:09:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:27.202 10:09:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:27.202 10:09:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:28:27.202 10:09:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:27.202 10:09:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:27.202 10:09:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:27.202 10:09:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:27.202 10:09:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:27.202 10:09:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:27.202 10:09:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:27.202 10:09:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:27.202 10:09:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:27.202 10:09:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:27.202 10:09:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:27.202 10:09:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:27.202 10:09:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:27.202 10:09:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:27.202 10:09:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:27.202 10:09:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:27.202 10:09:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:27.461 nvme0n1 00:28:27.461 10:09:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:27.461 10:09:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:27.461 10:09:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:27.461 10:09:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:27.461 10:09:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:27.461 10:09:40 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:27.461 10:09:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:27.461 10:09:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:27.461 10:09:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:27.461 10:09:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:27.461 10:09:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:27.461 10:09:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:28:27.461 10:09:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:27.461 10:09:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:27.461 10:09:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:28:27.461 10:09:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:27.461 10:09:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:27.461 10:09:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:27.461 10:09:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:27.461 10:09:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDgyNDRhYzRmYjRiZTA3ODhhN2Y2ZTk1OTAzNDgyNTRKWIDk: 00:28:27.461 10:09:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NWRkMmYyNDNkMWE3ZmFkNmEwMDljYmQ4YTBkYjcwMGZlMjJiMzAzMWVjMWZmNWI3MTlkMWQ2ZDJlZDU3NjMwMgqpwkE=: 00:28:27.461 10:09:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:27.461 10:09:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:27.461 10:09:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDgyNDRhYzRmYjRiZTA3ODhhN2Y2ZTk1OTAzNDgyNTRKWIDk: 00:28:27.461 10:09:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NWRkMmYyNDNkMWE3ZmFkNmEwMDljYmQ4YTBkYjcwMGZlMjJiMzAzMWVjMWZmNWI3MTlkMWQ2ZDJlZDU3NjMwMgqpwkE=: ]] 00:28:27.461 10:09:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NWRkMmYyNDNkMWE3ZmFkNmEwMDljYmQ4YTBkYjcwMGZlMjJiMzAzMWVjMWZmNWI3MTlkMWQ2ZDJlZDU3NjMwMgqpwkE=: 00:28:27.461 10:09:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:28:27.461 10:09:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:27.461 10:09:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:27.461 10:09:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:27.461 10:09:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:27.461 10:09:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:27.461 10:09:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:28:27.461 10:09:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:27.461 10:09:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:27.461 10:09:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:27.461 10:09:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:27.461 10:09:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:27.461 10:09:40 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:28:27.461 10:09:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:27.461 10:09:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:27.461 10:09:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:27.461 10:09:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:27.461 10:09:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:27.461 10:09:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:27.461 10:09:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:27.461 10:09:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:27.461 10:09:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:27.461 10:09:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:27.461 10:09:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:27.720 nvme0n1 00:28:27.720 10:09:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:27.720 10:09:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:27.720 10:09:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:27.720 10:09:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:27.720 10:09:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:27.720 10:09:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:27.720 10:09:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:27.720 10:09:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:27.720 10:09:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:27.720 10:09:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:27.720 10:09:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:27.720 10:09:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:27.720 10:09:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:28:27.720 10:09:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:27.720 10:09:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:27.720 10:09:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:27.720 10:09:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:27.720 10:09:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmUyNWJlMDczMzllZmVlNDEwNzc0N2RiMTE4MjllMDQ2NmNhNmM3ZGYwOGZkNTIzL2wIVA==: 00:28:27.720 10:09:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YmNlOWFmOWE4Mjk2Y2MyN2Y3ODk3ZmMwMWNhY2FlY2VmMzBkYTM3ZTA4MTZhNjM2yZBA8Q==: 00:28:27.720 10:09:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:27.720 10:09:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:27.720 10:09:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NmUyNWJlMDczMzllZmVlNDEwNzc0N2RiMTE4MjllMDQ2NmNhNmM3ZGYwOGZkNTIzL2wIVA==: 00:28:27.720 10:09:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YmNlOWFmOWE4Mjk2Y2MyN2Y3ODk3ZmMwMWNhY2FlY2VmMzBkYTM3ZTA4MTZhNjM2yZBA8Q==: ]] 00:28:27.720 10:09:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YmNlOWFmOWE4Mjk2Y2MyN2Y3ODk3ZmMwMWNhY2FlY2VmMzBkYTM3ZTA4MTZhNjM2yZBA8Q==: 00:28:27.720 10:09:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:28:27.720 10:09:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:27.720 10:09:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:27.720 10:09:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:27.720 10:09:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:27.720 10:09:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:27.720 10:09:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:28:27.720 10:09:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:27.720 10:09:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:27.720 10:09:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:27.720 10:09:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:27.720 10:09:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:27.720 10:09:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:27.720 10:09:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:27.720 10:09:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:27.720 10:09:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:27.720 10:09:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:27.720 10:09:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:27.720 10:09:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:27.720 10:09:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:27.720 10:09:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:27.720 10:09:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:27.720 10:09:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:27.720 10:09:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:27.720 nvme0n1 00:28:27.720 10:09:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:27.720 10:09:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:27.720 10:09:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:27.720 10:09:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:27.720 10:09:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:27.720 10:09:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:27.720 10:09:41 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:27.720 10:09:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:27.720 10:09:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:27.720 10:09:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:27.720 10:09:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:27.720 10:09:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:27.720 10:09:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:28:27.720 10:09:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:27.720 10:09:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:27.720 10:09:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:27.720 10:09:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:27.720 10:09:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZmVhYmU3MTA5ZGQ4MTAyNDkwNDQ3ZDUzY2JhMmY0M2UwyzRB: 00:28:27.720 10:09:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZmM4ODI3ZjI0MDNlNDNjNWYyZDczNjY0YTM5ODJiZTBV5Kvy: 00:28:27.720 10:09:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:27.720 10:09:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:27.979 10:09:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZmVhYmU3MTA5ZGQ4MTAyNDkwNDQ3ZDUzY2JhMmY0M2UwyzRB: 00:28:27.979 10:09:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZmM4ODI3ZjI0MDNlNDNjNWYyZDczNjY0YTM5ODJiZTBV5Kvy: ]] 00:28:27.979 10:09:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZmM4ODI3ZjI0MDNlNDNjNWYyZDczNjY0YTM5ODJiZTBV5Kvy: 00:28:27.979 10:09:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:28:27.979 10:09:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:27.979 10:09:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:27.979 10:09:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:27.979 10:09:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:27.979 10:09:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:27.979 10:09:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:28:27.979 10:09:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:27.979 10:09:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:27.979 10:09:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:27.979 10:09:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:27.979 10:09:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:27.979 10:09:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:27.979 10:09:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:27.979 10:09:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:27.979 10:09:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:27.979 10:09:41 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:27.979 10:09:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:27.979 10:09:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:27.979 10:09:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:27.979 10:09:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:27.979 10:09:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:27.979 10:09:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:27.979 10:09:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:27.979 nvme0n1 00:28:27.979 10:09:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:27.979 10:09:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:27.979 10:09:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:27.979 10:09:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:27.979 10:09:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:27.979 10:09:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:27.979 10:09:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:27.979 10:09:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:27.979 10:09:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:27.979 10:09:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:27.979 10:09:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:27.979 10:09:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:27.980 10:09:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:28:27.980 10:09:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:27.980 10:09:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:27.980 10:09:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:27.980 10:09:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:27.980 10:09:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OGFiNzdiY2Q0NWEwOGNlMTRjNDliODA1NTM0MTE0NGE2NzIwMmUyZDBhNzhiYTg5You87w==: 00:28:27.980 10:09:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTc0YmNjMzk4Njk4MzU4YzhhZjRjNzIwZjhhYzA1NzdfNrbE: 00:28:27.980 10:09:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:27.980 10:09:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:27.980 10:09:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OGFiNzdiY2Q0NWEwOGNlMTRjNDliODA1NTM0MTE0NGE2NzIwMmUyZDBhNzhiYTg5You87w==: 00:28:27.980 10:09:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTc0YmNjMzk4Njk4MzU4YzhhZjRjNzIwZjhhYzA1NzdfNrbE: ]] 00:28:27.980 10:09:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTc0YmNjMzk4Njk4MzU4YzhhZjRjNzIwZjhhYzA1NzdfNrbE: 00:28:27.980 10:09:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:28:27.980 10:09:41 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:27.980 10:09:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:27.980 10:09:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:27.980 10:09:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:27.980 10:09:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:27.980 10:09:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:28:27.980 10:09:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:27.980 10:09:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:27.980 10:09:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:27.980 10:09:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:27.980 10:09:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:27.980 10:09:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:27.980 10:09:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:27.980 10:09:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:27.980 10:09:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:27.980 10:09:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:27.980 10:09:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:27.980 10:09:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:27.980 10:09:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:27.980 10:09:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:27.980 10:09:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:27.980 10:09:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:27.980 10:09:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:28.240 nvme0n1 00:28:28.240 10:09:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:28.240 10:09:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:28.240 10:09:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:28.240 10:09:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:28.240 10:09:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:28.240 10:09:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:28.240 10:09:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:28.240 10:09:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:28.240 10:09:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:28.240 10:09:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:28.240 10:09:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:28.240 10:09:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:28:28.240 10:09:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:28:28.240 10:09:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:28.240 10:09:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:28.240 10:09:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:28.240 10:09:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:28.240 10:09:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzdiNzAzMTBiMjcxZDJiNWUzNDVlMDk1ZGNkZjQ1ODIyOTc2YzFmYWJiODAxNTAzMDkwNzI2NDYzMWNjMGVmMSqQF48=: 00:28:28.240 10:09:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:28.240 10:09:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:28.240 10:09:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:28.240 10:09:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzdiNzAzMTBiMjcxZDJiNWUzNDVlMDk1ZGNkZjQ1ODIyOTc2YzFmYWJiODAxNTAzMDkwNzI2NDYzMWNjMGVmMSqQF48=: 00:28:28.240 10:09:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:28.240 10:09:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:28:28.240 10:09:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:28.240 10:09:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:28.240 10:09:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:28.240 10:09:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:28.240 10:09:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:28.240 10:09:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:28:28.240 10:09:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:28.240 10:09:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:28.240 10:09:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:28.240 10:09:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:28.240 10:09:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:28.240 10:09:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:28.240 10:09:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:28.240 10:09:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:28.240 10:09:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:28.240 10:09:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:28.240 10:09:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:28.240 10:09:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:28.240 10:09:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:28.240 10:09:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:28.240 10:09:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:28.240 10:09:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:28:28.240 10:09:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:28.240 nvme0n1 00:28:28.240 10:09:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:28.240 10:09:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:28.240 10:09:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:28.240 10:09:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:28.240 10:09:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:28.240 10:09:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:28.240 10:09:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:28.240 10:09:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:28.240 10:09:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:28.240 10:09:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:28.500 10:09:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:28.500 10:09:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:28.500 10:09:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:28.500 10:09:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:28:28.500 10:09:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:28.500 10:09:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:28.500 10:09:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:28.500 10:09:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:28.500 10:09:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDgyNDRhYzRmYjRiZTA3ODhhN2Y2ZTk1OTAzNDgyNTRKWIDk: 00:28:28.500 10:09:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NWRkMmYyNDNkMWE3ZmFkNmEwMDljYmQ4YTBkYjcwMGZlMjJiMzAzMWVjMWZmNWI3MTlkMWQ2ZDJlZDU3NjMwMgqpwkE=: 00:28:28.500 10:09:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:28.500 10:09:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:28.500 10:09:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDgyNDRhYzRmYjRiZTA3ODhhN2Y2ZTk1OTAzNDgyNTRKWIDk: 00:28:28.500 10:09:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NWRkMmYyNDNkMWE3ZmFkNmEwMDljYmQ4YTBkYjcwMGZlMjJiMzAzMWVjMWZmNWI3MTlkMWQ2ZDJlZDU3NjMwMgqpwkE=: ]] 00:28:28.500 10:09:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NWRkMmYyNDNkMWE3ZmFkNmEwMDljYmQ4YTBkYjcwMGZlMjJiMzAzMWVjMWZmNWI3MTlkMWQ2ZDJlZDU3NjMwMgqpwkE=: 00:28:28.500 10:09:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:28:28.500 10:09:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:28.500 10:09:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:28.500 10:09:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:28.500 10:09:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:28.500 10:09:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:28.500 10:09:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha256 --dhchap-dhgroups ffdhe3072 00:28:28.500 10:09:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:28.500 10:09:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:28.500 10:09:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:28.500 10:09:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:28.500 10:09:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:28.500 10:09:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:28.500 10:09:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:28.500 10:09:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:28.500 10:09:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:28.500 10:09:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:28.500 10:09:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:28.500 10:09:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:28.500 10:09:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:28.500 10:09:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:28.500 10:09:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:28.500 10:09:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:28.500 10:09:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:28.761 nvme0n1 00:28:28.761 10:09:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:28.761 10:09:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:28.761 10:09:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:28.761 10:09:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:28.761 10:09:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:28.761 10:09:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:28.761 10:09:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:28.761 10:09:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:28.761 10:09:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:28.761 10:09:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:28.761 10:09:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:28.761 10:09:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:28.761 10:09:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:28:28.761 10:09:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:28.761 10:09:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:28.761 10:09:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:28.761 10:09:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:28.761 10:09:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:NmUyNWJlMDczMzllZmVlNDEwNzc0N2RiMTE4MjllMDQ2NmNhNmM3ZGYwOGZkNTIzL2wIVA==: 00:28:28.761 10:09:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YmNlOWFmOWE4Mjk2Y2MyN2Y3ODk3ZmMwMWNhY2FlY2VmMzBkYTM3ZTA4MTZhNjM2yZBA8Q==: 00:28:28.761 10:09:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:28.761 10:09:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:28.761 10:09:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmUyNWJlMDczMzllZmVlNDEwNzc0N2RiMTE4MjllMDQ2NmNhNmM3ZGYwOGZkNTIzL2wIVA==: 00:28:28.761 10:09:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YmNlOWFmOWE4Mjk2Y2MyN2Y3ODk3ZmMwMWNhY2FlY2VmMzBkYTM3ZTA4MTZhNjM2yZBA8Q==: ]] 00:28:28.761 10:09:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YmNlOWFmOWE4Mjk2Y2MyN2Y3ODk3ZmMwMWNhY2FlY2VmMzBkYTM3ZTA4MTZhNjM2yZBA8Q==: 00:28:28.761 10:09:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:28:28.761 10:09:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:28.761 10:09:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:28.761 10:09:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:28.761 10:09:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:28.761 10:09:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:28.761 10:09:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:28:28.761 10:09:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:28.761 10:09:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:28.761 10:09:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:28.761 10:09:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:28.761 10:09:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:28.761 10:09:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:28.761 10:09:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:28.761 10:09:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:28.761 10:09:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:28.761 10:09:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:28.761 10:09:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:28.761 10:09:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:28.761 10:09:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:28.761 10:09:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:28.761 10:09:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:28.761 10:09:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:28.761 10:09:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:29.020 nvme0n1 00:28:29.020 10:09:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:29.020 10:09:42 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:29.020 10:09:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:29.020 10:09:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:29.020 10:09:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:29.020 10:09:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:29.020 10:09:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:29.020 10:09:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:29.020 10:09:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:29.020 10:09:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:29.020 10:09:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:29.020 10:09:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:29.020 10:09:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:28:29.020 10:09:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:29.020 10:09:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:29.020 10:09:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:29.020 10:09:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:29.020 10:09:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZmVhYmU3MTA5ZGQ4MTAyNDkwNDQ3ZDUzY2JhMmY0M2UwyzRB: 00:28:29.020 10:09:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZmM4ODI3ZjI0MDNlNDNjNWYyZDczNjY0YTM5ODJiZTBV5Kvy: 00:28:29.020 10:09:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:29.020 10:09:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:29.020 10:09:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZmVhYmU3MTA5ZGQ4MTAyNDkwNDQ3ZDUzY2JhMmY0M2UwyzRB: 00:28:29.020 10:09:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZmM4ODI3ZjI0MDNlNDNjNWYyZDczNjY0YTM5ODJiZTBV5Kvy: ]] 00:28:29.020 10:09:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZmM4ODI3ZjI0MDNlNDNjNWYyZDczNjY0YTM5ODJiZTBV5Kvy: 00:28:29.020 10:09:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:28:29.020 10:09:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:29.020 10:09:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:29.020 10:09:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:29.020 10:09:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:29.020 10:09:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:29.020 10:09:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:28:29.021 10:09:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:29.021 10:09:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:29.021 10:09:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:29.021 10:09:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:29.021 10:09:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:28:29.021 10:09:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:29.021 10:09:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:29.021 10:09:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:29.021 10:09:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:29.021 10:09:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:29.021 10:09:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:29.021 10:09:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:29.021 10:09:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:29.021 10:09:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:29.021 10:09:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:29.021 10:09:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:29.021 10:09:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:29.021 nvme0n1 00:28:29.021 10:09:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:29.021 10:09:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:29.021 10:09:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:29.021 10:09:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:29.021 10:09:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:29.021 10:09:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:29.279 10:09:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:29.279 10:09:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:29.279 10:09:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:29.279 10:09:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:29.279 10:09:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:29.279 10:09:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:29.279 10:09:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:28:29.279 10:09:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:29.279 10:09:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:29.279 10:09:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:29.279 10:09:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:29.279 10:09:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OGFiNzdiY2Q0NWEwOGNlMTRjNDliODA1NTM0MTE0NGE2NzIwMmUyZDBhNzhiYTg5You87w==: 00:28:29.280 10:09:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTc0YmNjMzk4Njk4MzU4YzhhZjRjNzIwZjhhYzA1NzdfNrbE: 00:28:29.280 10:09:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:29.280 10:09:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:29.280 10:09:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:OGFiNzdiY2Q0NWEwOGNlMTRjNDliODA1NTM0MTE0NGE2NzIwMmUyZDBhNzhiYTg5You87w==: 00:28:29.280 10:09:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTc0YmNjMzk4Njk4MzU4YzhhZjRjNzIwZjhhYzA1NzdfNrbE: ]] 00:28:29.280 10:09:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTc0YmNjMzk4Njk4MzU4YzhhZjRjNzIwZjhhYzA1NzdfNrbE: 00:28:29.280 10:09:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:28:29.280 10:09:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:29.280 10:09:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:29.280 10:09:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:29.280 10:09:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:29.280 10:09:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:29.280 10:09:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:28:29.280 10:09:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:29.280 10:09:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:29.280 10:09:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:29.280 10:09:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:29.280 10:09:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:29.280 10:09:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:29.280 10:09:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:29.280 10:09:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:29.280 10:09:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:29.280 10:09:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:29.280 10:09:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:29.280 10:09:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:29.280 10:09:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:29.280 10:09:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:29.280 10:09:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:29.280 10:09:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:29.280 10:09:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:29.280 nvme0n1 00:28:29.280 10:09:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:29.280 10:09:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:29.280 10:09:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:29.280 10:09:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:29.280 10:09:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:29.280 10:09:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:29.280 10:09:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 
]] 00:28:29.280 10:09:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:29.280 10:09:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:29.280 10:09:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:29.280 10:09:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:29.280 10:09:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:29.280 10:09:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:28:29.280 10:09:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:29.280 10:09:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:29.280 10:09:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:29.280 10:09:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:29.280 10:09:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzdiNzAzMTBiMjcxZDJiNWUzNDVlMDk1ZGNkZjQ1ODIyOTc2YzFmYWJiODAxNTAzMDkwNzI2NDYzMWNjMGVmMSqQF48=: 00:28:29.280 10:09:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:29.280 10:09:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:29.280 10:09:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:29.280 10:09:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzdiNzAzMTBiMjcxZDJiNWUzNDVlMDk1ZGNkZjQ1ODIyOTc2YzFmYWJiODAxNTAzMDkwNzI2NDYzMWNjMGVmMSqQF48=: 00:28:29.280 10:09:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:29.280 10:09:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:28:29.280 10:09:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:29.280 10:09:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:29.280 10:09:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:29.280 10:09:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:29.280 10:09:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:29.280 10:09:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:28:29.280 10:09:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:29.280 10:09:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:29.280 10:09:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:29.280 10:09:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:29.280 10:09:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:29.280 10:09:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:29.280 10:09:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:29.280 10:09:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:29.280 10:09:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:29.280 10:09:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:29.280 10:09:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:29.280 10:09:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 
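The same pattern repeats throughout this trace: an outer loop over the DH groups under test and an inner loop over the key indices, where each iteration first programs the target side (nvmet_auth_set_key) and then exercises the host side (connect_authenticate). Key index 4 has no controller key configured, its ckey is empty, so the [[ -z '' ]] test above skips the second echo and the subsequent attach uses --dhchap-key key4 alone. A minimal, runnable skeleton of that driver loop is sketched below; the helper bodies are placeholders only (the real ones are the functions whose expansions fill this log), and the key material is assumed to have been generated earlier in host/auth.sh, so treat this as an illustration of the loop structure rather than a copy of the script.

#!/usr/bin/env bash
# Skeleton of the loop driving these rounds, reconstructed from the xtrace output.
# The two helpers are placeholders; the real implementations are the ones being
# traced above (target-side key setup and host-side RPC connect/verify/detach).
set -euo pipefail

dhgroups=(ffdhe3072 ffdhe4096 ffdhe6144)        # DH groups exercised in this part of the log
keys=(k0 k1 k2 k3 k4)                           # placeholder secrets; the real DHHC-1 keys are set earlier
ckeys=(c0 c1 c2 c3 '')                          # placeholder controller keys; index 4 is deliberately empty

nvmet_auth_set_key()   { echo "target <- digest=$1 dhgroup=$2 keyid=$3"; }  # placeholder
connect_authenticate() { echo "host   -> digest=$1 dhgroup=$2 keyid=$3"; }  # placeholder (see the RPC sketch below)

for dhgroup in "${dhgroups[@]}"; do             # host/auth.sh@101 in the trace
  for keyid in "${!keys[@]}"; do                # host/auth.sh@102 in the trace
    nvmet_auth_set_key sha256 "$dhgroup" "$keyid"
    connect_authenticate sha256 "$dhgroup" "$keyid"
  done
done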
00:28:29.280 10:09:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:29.280 10:09:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:29.280 10:09:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:29.280 10:09:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:29.280 10:09:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:29.539 nvme0n1 00:28:29.539 10:09:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:29.539 10:09:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:29.539 10:09:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:29.539 10:09:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:29.539 10:09:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:29.539 10:09:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:29.539 10:09:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:29.539 10:09:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:29.539 10:09:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:29.539 10:09:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:29.539 10:09:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:29.539 10:09:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:29.539 10:09:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:29.539 10:09:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:28:29.539 10:09:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:29.539 10:09:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:29.539 10:09:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:29.539 10:09:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:29.539 10:09:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDgyNDRhYzRmYjRiZTA3ODhhN2Y2ZTk1OTAzNDgyNTRKWIDk: 00:28:29.539 10:09:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NWRkMmYyNDNkMWE3ZmFkNmEwMDljYmQ4YTBkYjcwMGZlMjJiMzAzMWVjMWZmNWI3MTlkMWQ2ZDJlZDU3NjMwMgqpwkE=: 00:28:29.539 10:09:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:29.539 10:09:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:30.107 10:09:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDgyNDRhYzRmYjRiZTA3ODhhN2Y2ZTk1OTAzNDgyNTRKWIDk: 00:28:30.107 10:09:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NWRkMmYyNDNkMWE3ZmFkNmEwMDljYmQ4YTBkYjcwMGZlMjJiMzAzMWVjMWZmNWI3MTlkMWQ2ZDJlZDU3NjMwMgqpwkE=: ]] 00:28:30.107 10:09:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NWRkMmYyNDNkMWE3ZmFkNmEwMDljYmQ4YTBkYjcwMGZlMjJiMzAzMWVjMWZmNWI3MTlkMWQ2ZDJlZDU3NjMwMgqpwkE=: 00:28:30.107 10:09:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:28:30.107 10:09:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 
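Each connect_authenticate expansion in the log, including the one that has just started here for ffdhe4096, comes down to four RPCs against the SPDK application: constrain the allowed DH-HMAC-CHAP digests and DH groups, attach the NVMe/TCP controller with the host key (plus the controller key when one is configured), check that the expected controller name comes back from bdev_nvme_get_controllers, and detach it for the next round. The standalone sketch below reproduces that flow with the parameters visible in this trace; it assumes the test's rpc_cmd helper forwards to scripts/rpc.py and that the key names key0/ckey0 were registered with the application earlier in the run (not shown here), so read it as an illustration of the RPC sequence rather than a verbatim copy of host/auth.sh.

#!/usr/bin/env bash
# One authentication round as seen in the trace, issued directly via scripts/rpc.py.
# Assumptions: rpc.py talks to the default SPDK RPC socket, and the key names
# key0/ckey0 have already been made available to the application by the test setup.
set -euo pipefail
rpc=./scripts/rpc.py

# Limit the host to the digest / DH group pair being tested in this round.
$rpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096

# Attach over NVMe/TCP, authenticating with the host key and the controller key.
$rpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0

# The controller must show up under the expected name, then it is detached
# so the next digest/dhgroup/key combination can be exercised.
[[ "$($rpc bdev_nvme_get_controllers | jq -r '.[].name')" == "nvme0" ]]
$rpc bdev_nvme_detach_controller nvme0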
00:28:30.107 10:09:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:30.107 10:09:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:30.107 10:09:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:30.107 10:09:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:30.107 10:09:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:28:30.107 10:09:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:30.107 10:09:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:30.107 10:09:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:30.107 10:09:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:30.107 10:09:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:30.107 10:09:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:30.107 10:09:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:30.107 10:09:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:30.107 10:09:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:30.107 10:09:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:30.107 10:09:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:30.107 10:09:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:30.107 10:09:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:30.107 10:09:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:30.107 10:09:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:30.107 10:09:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:30.107 10:09:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:30.107 nvme0n1 00:28:30.107 10:09:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:30.107 10:09:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:30.107 10:09:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:30.107 10:09:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:30.107 10:09:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:30.107 10:09:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:30.107 10:09:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:30.107 10:09:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:30.107 10:09:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:30.366 10:09:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:30.366 10:09:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:30.366 10:09:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:30.366 10:09:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe4096 1 00:28:30.366 10:09:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:30.366 10:09:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:30.366 10:09:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:30.366 10:09:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:30.366 10:09:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmUyNWJlMDczMzllZmVlNDEwNzc0N2RiMTE4MjllMDQ2NmNhNmM3ZGYwOGZkNTIzL2wIVA==: 00:28:30.366 10:09:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YmNlOWFmOWE4Mjk2Y2MyN2Y3ODk3ZmMwMWNhY2FlY2VmMzBkYTM3ZTA4MTZhNjM2yZBA8Q==: 00:28:30.366 10:09:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:30.366 10:09:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:30.366 10:09:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmUyNWJlMDczMzllZmVlNDEwNzc0N2RiMTE4MjllMDQ2NmNhNmM3ZGYwOGZkNTIzL2wIVA==: 00:28:30.366 10:09:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YmNlOWFmOWE4Mjk2Y2MyN2Y3ODk3ZmMwMWNhY2FlY2VmMzBkYTM3ZTA4MTZhNjM2yZBA8Q==: ]] 00:28:30.366 10:09:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YmNlOWFmOWE4Mjk2Y2MyN2Y3ODk3ZmMwMWNhY2FlY2VmMzBkYTM3ZTA4MTZhNjM2yZBA8Q==: 00:28:30.366 10:09:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:28:30.366 10:09:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:30.366 10:09:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:30.366 10:09:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:30.366 10:09:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:30.366 10:09:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:30.366 10:09:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:28:30.366 10:09:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:30.366 10:09:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:30.366 10:09:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:30.366 10:09:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:30.366 10:09:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:30.366 10:09:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:30.366 10:09:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:30.366 10:09:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:30.366 10:09:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:30.366 10:09:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:30.366 10:09:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:30.367 10:09:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:30.367 10:09:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:30.367 10:09:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:30.367 10:09:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:30.367 10:09:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:30.367 10:09:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:30.367 nvme0n1 00:28:30.367 10:09:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:30.367 10:09:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:30.367 10:09:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:30.367 10:09:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:30.367 10:09:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:30.367 10:09:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:30.367 10:09:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:30.367 10:09:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:30.367 10:09:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:30.367 10:09:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:30.626 10:09:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:30.626 10:09:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:30.626 10:09:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:28:30.626 10:09:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:30.626 10:09:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:30.626 10:09:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:30.626 10:09:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:30.626 10:09:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZmVhYmU3MTA5ZGQ4MTAyNDkwNDQ3ZDUzY2JhMmY0M2UwyzRB: 00:28:30.626 10:09:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZmM4ODI3ZjI0MDNlNDNjNWYyZDczNjY0YTM5ODJiZTBV5Kvy: 00:28:30.626 10:09:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:30.626 10:09:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:30.626 10:09:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZmVhYmU3MTA5ZGQ4MTAyNDkwNDQ3ZDUzY2JhMmY0M2UwyzRB: 00:28:30.626 10:09:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZmM4ODI3ZjI0MDNlNDNjNWYyZDczNjY0YTM5ODJiZTBV5Kvy: ]] 00:28:30.626 10:09:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZmM4ODI3ZjI0MDNlNDNjNWYyZDczNjY0YTM5ODJiZTBV5Kvy: 00:28:30.626 10:09:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:28:30.626 10:09:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:30.626 10:09:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:30.626 10:09:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:30.626 10:09:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:30.626 10:09:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:30.626 10:09:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe4096 00:28:30.626 10:09:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:30.626 10:09:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:30.626 10:09:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:30.626 10:09:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:30.626 10:09:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:30.626 10:09:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:30.626 10:09:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:30.626 10:09:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:30.626 10:09:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:30.626 10:09:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:30.626 10:09:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:30.626 10:09:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:30.626 10:09:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:30.626 10:09:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:30.626 10:09:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:30.626 10:09:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:30.626 10:09:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:30.626 nvme0n1 00:28:30.626 10:09:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:30.626 10:09:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:30.626 10:09:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:30.626 10:09:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:30.626 10:09:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:30.626 10:09:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:30.626 10:09:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:30.626 10:09:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:30.626 10:09:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:30.626 10:09:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:30.626 10:09:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:30.626 10:09:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:30.626 10:09:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:28:30.626 10:09:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:30.626 10:09:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:30.626 10:09:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:30.626 10:09:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:30.626 10:09:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:OGFiNzdiY2Q0NWEwOGNlMTRjNDliODA1NTM0MTE0NGE2NzIwMmUyZDBhNzhiYTg5You87w==: 00:28:30.626 10:09:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTc0YmNjMzk4Njk4MzU4YzhhZjRjNzIwZjhhYzA1NzdfNrbE: 00:28:30.626 10:09:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:30.626 10:09:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:30.626 10:09:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OGFiNzdiY2Q0NWEwOGNlMTRjNDliODA1NTM0MTE0NGE2NzIwMmUyZDBhNzhiYTg5You87w==: 00:28:30.626 10:09:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTc0YmNjMzk4Njk4MzU4YzhhZjRjNzIwZjhhYzA1NzdfNrbE: ]] 00:28:30.626 10:09:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTc0YmNjMzk4Njk4MzU4YzhhZjRjNzIwZjhhYzA1NzdfNrbE: 00:28:30.626 10:09:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:28:30.626 10:09:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:30.626 10:09:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:30.886 10:09:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:30.886 10:09:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:30.886 10:09:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:30.886 10:09:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:28:30.886 10:09:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:30.886 10:09:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:30.886 10:09:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:30.886 10:09:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:30.886 10:09:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:30.886 10:09:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:30.886 10:09:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:30.886 10:09:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:30.886 10:09:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:30.886 10:09:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:30.886 10:09:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:30.886 10:09:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:30.886 10:09:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:30.886 10:09:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:30.886 10:09:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:30.886 10:09:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:30.886 10:09:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:30.886 nvme0n1 00:28:30.886 10:09:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:30.886 10:09:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:28:30.886 10:09:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:30.886 10:09:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:30.886 10:09:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:30.886 10:09:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:30.886 10:09:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:30.886 10:09:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:30.886 10:09:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:30.886 10:09:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:30.886 10:09:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:30.886 10:09:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:30.886 10:09:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:28:30.886 10:09:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:30.886 10:09:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:30.886 10:09:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:30.886 10:09:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:30.886 10:09:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzdiNzAzMTBiMjcxZDJiNWUzNDVlMDk1ZGNkZjQ1ODIyOTc2YzFmYWJiODAxNTAzMDkwNzI2NDYzMWNjMGVmMSqQF48=: 00:28:30.886 10:09:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:30.886 10:09:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:30.886 10:09:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:30.886 10:09:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzdiNzAzMTBiMjcxZDJiNWUzNDVlMDk1ZGNkZjQ1ODIyOTc2YzFmYWJiODAxNTAzMDkwNzI2NDYzMWNjMGVmMSqQF48=: 00:28:30.886 10:09:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:30.886 10:09:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:28:30.886 10:09:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:30.886 10:09:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:30.886 10:09:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:30.886 10:09:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:30.886 10:09:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:30.886 10:09:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:28:30.886 10:09:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:30.886 10:09:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:30.886 10:09:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:30.886 10:09:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:30.886 10:09:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:30.886 10:09:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:30.886 10:09:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:30.886 10:09:44 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:30.886 10:09:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:30.886 10:09:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:30.886 10:09:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:30.886 10:09:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:30.886 10:09:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:30.886 10:09:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:30.886 10:09:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:30.886 10:09:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:30.886 10:09:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:31.145 nvme0n1 00:28:31.145 10:09:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:31.145 10:09:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:31.145 10:09:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:31.145 10:09:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:31.145 10:09:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:31.145 10:09:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:31.145 10:09:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:31.145 10:09:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:31.145 10:09:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:31.145 10:09:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:31.145 10:09:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:31.145 10:09:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:31.145 10:09:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:31.145 10:09:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:28:31.145 10:09:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:31.145 10:09:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:31.145 10:09:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:31.145 10:09:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:31.145 10:09:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDgyNDRhYzRmYjRiZTA3ODhhN2Y2ZTk1OTAzNDgyNTRKWIDk: 00:28:31.145 10:09:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NWRkMmYyNDNkMWE3ZmFkNmEwMDljYmQ4YTBkYjcwMGZlMjJiMzAzMWVjMWZmNWI3MTlkMWQ2ZDJlZDU3NjMwMgqpwkE=: 00:28:31.145 10:09:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:31.145 10:09:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:32.523 10:09:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDgyNDRhYzRmYjRiZTA3ODhhN2Y2ZTk1OTAzNDgyNTRKWIDk: 00:28:32.523 10:09:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:03:NWRkMmYyNDNkMWE3ZmFkNmEwMDljYmQ4YTBkYjcwMGZlMjJiMzAzMWVjMWZmNWI3MTlkMWQ2ZDJlZDU3NjMwMgqpwkE=: ]] 00:28:32.523 10:09:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NWRkMmYyNDNkMWE3ZmFkNmEwMDljYmQ4YTBkYjcwMGZlMjJiMzAzMWVjMWZmNWI3MTlkMWQ2ZDJlZDU3NjMwMgqpwkE=: 00:28:32.523 10:09:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:28:32.523 10:09:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:32.523 10:09:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:32.523 10:09:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:32.523 10:09:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:32.523 10:09:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:32.523 10:09:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:28:32.523 10:09:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:32.523 10:09:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:32.523 10:09:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:32.523 10:09:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:32.523 10:09:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:32.523 10:09:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:32.523 10:09:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:32.523 10:09:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:32.523 10:09:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:32.523 10:09:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:32.523 10:09:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:32.523 10:09:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:32.523 10:09:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:32.523 10:09:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:32.523 10:09:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:32.523 10:09:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:32.523 10:09:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:32.783 nvme0n1 00:28:32.783 10:09:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:32.783 10:09:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:32.783 10:09:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:32.783 10:09:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:32.783 10:09:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:32.783 10:09:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:32.783 10:09:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:32.783 10:09:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- 
# rpc_cmd bdev_nvme_detach_controller nvme0 00:28:32.783 10:09:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:32.783 10:09:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:33.043 10:09:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:33.043 10:09:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:33.043 10:09:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:28:33.043 10:09:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:33.043 10:09:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:33.043 10:09:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:33.043 10:09:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:33.043 10:09:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmUyNWJlMDczMzllZmVlNDEwNzc0N2RiMTE4MjllMDQ2NmNhNmM3ZGYwOGZkNTIzL2wIVA==: 00:28:33.043 10:09:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YmNlOWFmOWE4Mjk2Y2MyN2Y3ODk3ZmMwMWNhY2FlY2VmMzBkYTM3ZTA4MTZhNjM2yZBA8Q==: 00:28:33.043 10:09:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:33.043 10:09:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:33.043 10:09:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmUyNWJlMDczMzllZmVlNDEwNzc0N2RiMTE4MjllMDQ2NmNhNmM3ZGYwOGZkNTIzL2wIVA==: 00:28:33.043 10:09:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YmNlOWFmOWE4Mjk2Y2MyN2Y3ODk3ZmMwMWNhY2FlY2VmMzBkYTM3ZTA4MTZhNjM2yZBA8Q==: ]] 00:28:33.043 10:09:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YmNlOWFmOWE4Mjk2Y2MyN2Y3ODk3ZmMwMWNhY2FlY2VmMzBkYTM3ZTA4MTZhNjM2yZBA8Q==: 00:28:33.043 10:09:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:28:33.043 10:09:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:33.043 10:09:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:33.043 10:09:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:33.043 10:09:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:33.043 10:09:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:33.043 10:09:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:28:33.043 10:09:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:33.043 10:09:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:33.043 10:09:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:33.043 10:09:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:33.043 10:09:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:33.043 10:09:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:33.043 10:09:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:33.043 10:09:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:33.043 10:09:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:33.043 10:09:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z 
tcp ]] 00:28:33.043 10:09:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:33.043 10:09:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:33.043 10:09:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:33.043 10:09:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:33.043 10:09:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:33.043 10:09:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:33.043 10:09:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:33.303 nvme0n1 00:28:33.303 10:09:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:33.303 10:09:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:33.303 10:09:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:33.303 10:09:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:33.303 10:09:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:33.303 10:09:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:33.303 10:09:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:33.303 10:09:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:33.303 10:09:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:33.303 10:09:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:33.303 10:09:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:33.303 10:09:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:33.303 10:09:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:28:33.303 10:09:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:33.303 10:09:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:33.303 10:09:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:33.303 10:09:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:33.303 10:09:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZmVhYmU3MTA5ZGQ4MTAyNDkwNDQ3ZDUzY2JhMmY0M2UwyzRB: 00:28:33.303 10:09:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZmM4ODI3ZjI0MDNlNDNjNWYyZDczNjY0YTM5ODJiZTBV5Kvy: 00:28:33.303 10:09:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:33.303 10:09:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:33.303 10:09:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZmVhYmU3MTA5ZGQ4MTAyNDkwNDQ3ZDUzY2JhMmY0M2UwyzRB: 00:28:33.303 10:09:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZmM4ODI3ZjI0MDNlNDNjNWYyZDczNjY0YTM5ODJiZTBV5Kvy: ]] 00:28:33.303 10:09:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZmM4ODI3ZjI0MDNlNDNjNWYyZDczNjY0YTM5ODJiZTBV5Kvy: 00:28:33.303 10:09:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:28:33.303 10:09:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:33.303 
10:09:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:33.303 10:09:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:33.303 10:09:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:33.303 10:09:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:33.303 10:09:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:28:33.303 10:09:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:33.303 10:09:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:33.304 10:09:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:33.304 10:09:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:33.304 10:09:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:33.304 10:09:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:33.304 10:09:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:33.304 10:09:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:33.304 10:09:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:33.304 10:09:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:33.304 10:09:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:33.304 10:09:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:33.304 10:09:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:33.304 10:09:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:33.304 10:09:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:33.304 10:09:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:33.304 10:09:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:33.573 nvme0n1 00:28:33.573 10:09:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:33.573 10:09:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:33.573 10:09:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:33.573 10:09:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:33.573 10:09:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:33.573 10:09:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:33.573 10:09:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:33.573 10:09:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:33.573 10:09:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:33.573 10:09:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:33.573 10:09:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:33.573 10:09:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:33.573 10:09:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe6144 3 00:28:33.573 10:09:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:33.573 10:09:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:33.573 10:09:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:33.573 10:09:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:33.573 10:09:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OGFiNzdiY2Q0NWEwOGNlMTRjNDliODA1NTM0MTE0NGE2NzIwMmUyZDBhNzhiYTg5You87w==: 00:28:33.573 10:09:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTc0YmNjMzk4Njk4MzU4YzhhZjRjNzIwZjhhYzA1NzdfNrbE: 00:28:33.573 10:09:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:33.573 10:09:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:33.573 10:09:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OGFiNzdiY2Q0NWEwOGNlMTRjNDliODA1NTM0MTE0NGE2NzIwMmUyZDBhNzhiYTg5You87w==: 00:28:33.573 10:09:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTc0YmNjMzk4Njk4MzU4YzhhZjRjNzIwZjhhYzA1NzdfNrbE: ]] 00:28:33.573 10:09:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTc0YmNjMzk4Njk4MzU4YzhhZjRjNzIwZjhhYzA1NzdfNrbE: 00:28:33.573 10:09:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:28:33.573 10:09:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:33.573 10:09:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:33.573 10:09:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:33.573 10:09:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:33.573 10:09:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:33.573 10:09:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:28:33.573 10:09:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:33.573 10:09:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:33.573 10:09:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:33.573 10:09:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:33.573 10:09:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:33.573 10:09:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:33.573 10:09:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:33.573 10:09:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:33.573 10:09:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:33.573 10:09:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:33.573 10:09:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:33.573 10:09:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:33.573 10:09:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:33.573 10:09:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:33.573 10:09:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:33.573 10:09:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:33.574 10:09:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:33.855 nvme0n1 00:28:33.855 10:09:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:34.115 10:09:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:34.115 10:09:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:34.115 10:09:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:34.115 10:09:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:34.115 10:09:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:34.115 10:09:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:34.115 10:09:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:34.115 10:09:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:34.115 10:09:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:34.115 10:09:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:34.115 10:09:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:34.115 10:09:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:28:34.115 10:09:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:34.115 10:09:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:34.115 10:09:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:34.115 10:09:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:34.115 10:09:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzdiNzAzMTBiMjcxZDJiNWUzNDVlMDk1ZGNkZjQ1ODIyOTc2YzFmYWJiODAxNTAzMDkwNzI2NDYzMWNjMGVmMSqQF48=: 00:28:34.115 10:09:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:34.115 10:09:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:34.115 10:09:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:34.115 10:09:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzdiNzAzMTBiMjcxZDJiNWUzNDVlMDk1ZGNkZjQ1ODIyOTc2YzFmYWJiODAxNTAzMDkwNzI2NDYzMWNjMGVmMSqQF48=: 00:28:34.115 10:09:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:34.115 10:09:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:28:34.115 10:09:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:34.115 10:09:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:34.115 10:09:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:34.115 10:09:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:34.115 10:09:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:34.115 10:09:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:28:34.115 10:09:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:34.115 10:09:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:34.115 10:09:47 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:34.115 10:09:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:34.115 10:09:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:34.115 10:09:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:34.115 10:09:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:34.115 10:09:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:34.115 10:09:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:34.115 10:09:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:34.115 10:09:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:34.115 10:09:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:34.115 10:09:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:34.115 10:09:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:34.115 10:09:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:34.115 10:09:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:34.115 10:09:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:34.378 nvme0n1 00:28:34.378 10:09:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:34.378 10:09:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:34.378 10:09:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:34.378 10:09:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:34.378 10:09:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:34.378 10:09:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:34.378 10:09:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:34.378 10:09:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:34.378 10:09:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:34.378 10:09:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:34.378 10:09:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:34.378 10:09:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:34.378 10:09:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:34.378 10:09:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:28:34.378 10:09:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:34.378 10:09:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:34.378 10:09:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:34.378 10:09:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:34.378 10:09:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDgyNDRhYzRmYjRiZTA3ODhhN2Y2ZTk1OTAzNDgyNTRKWIDk: 00:28:34.378 10:09:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:NWRkMmYyNDNkMWE3ZmFkNmEwMDljYmQ4YTBkYjcwMGZlMjJiMzAzMWVjMWZmNWI3MTlkMWQ2ZDJlZDU3NjMwMgqpwkE=: 00:28:34.378 10:09:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:34.378 10:09:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:34.378 10:09:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDgyNDRhYzRmYjRiZTA3ODhhN2Y2ZTk1OTAzNDgyNTRKWIDk: 00:28:34.378 10:09:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NWRkMmYyNDNkMWE3ZmFkNmEwMDljYmQ4YTBkYjcwMGZlMjJiMzAzMWVjMWZmNWI3MTlkMWQ2ZDJlZDU3NjMwMgqpwkE=: ]] 00:28:34.378 10:09:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NWRkMmYyNDNkMWE3ZmFkNmEwMDljYmQ4YTBkYjcwMGZlMjJiMzAzMWVjMWZmNWI3MTlkMWQ2ZDJlZDU3NjMwMgqpwkE=: 00:28:34.378 10:09:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:28:34.378 10:09:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:34.378 10:09:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:34.378 10:09:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:34.378 10:09:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:34.378 10:09:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:34.378 10:09:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:28:34.378 10:09:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:34.378 10:09:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:34.378 10:09:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:34.378 10:09:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:34.378 10:09:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:34.378 10:09:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:34.378 10:09:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:34.378 10:09:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:34.378 10:09:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:34.378 10:09:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:34.378 10:09:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:34.378 10:09:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:34.378 10:09:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:34.378 10:09:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:34.378 10:09:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:34.378 10:09:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:34.378 10:09:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:34.945 nvme0n1 00:28:34.945 10:09:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:34.945 10:09:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:34.945 10:09:48 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:34.945 10:09:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:34.945 10:09:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:34.945 10:09:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:34.945 10:09:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:34.945 10:09:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:34.945 10:09:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:34.945 10:09:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:34.945 10:09:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:34.945 10:09:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:34.946 10:09:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:28:34.946 10:09:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:34.946 10:09:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:34.946 10:09:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:34.946 10:09:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:34.946 10:09:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmUyNWJlMDczMzllZmVlNDEwNzc0N2RiMTE4MjllMDQ2NmNhNmM3ZGYwOGZkNTIzL2wIVA==: 00:28:34.946 10:09:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YmNlOWFmOWE4Mjk2Y2MyN2Y3ODk3ZmMwMWNhY2FlY2VmMzBkYTM3ZTA4MTZhNjM2yZBA8Q==: 00:28:34.946 10:09:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:34.946 10:09:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:34.946 10:09:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmUyNWJlMDczMzllZmVlNDEwNzc0N2RiMTE4MjllMDQ2NmNhNmM3ZGYwOGZkNTIzL2wIVA==: 00:28:34.946 10:09:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YmNlOWFmOWE4Mjk2Y2MyN2Y3ODk3ZmMwMWNhY2FlY2VmMzBkYTM3ZTA4MTZhNjM2yZBA8Q==: ]] 00:28:34.946 10:09:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YmNlOWFmOWE4Mjk2Y2MyN2Y3ODk3ZmMwMWNhY2FlY2VmMzBkYTM3ZTA4MTZhNjM2yZBA8Q==: 00:28:34.946 10:09:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:28:34.946 10:09:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:34.946 10:09:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:34.946 10:09:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:34.946 10:09:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:34.946 10:09:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:34.946 10:09:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:28:34.946 10:09:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:34.946 10:09:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:34.946 10:09:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:34.946 10:09:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:34.946 10:09:48 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:28:34.946 10:09:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:34.946 10:09:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:34.946 10:09:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:34.946 10:09:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:34.946 10:09:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:34.946 10:09:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:34.946 10:09:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:34.946 10:09:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:34.946 10:09:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:34.946 10:09:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:34.946 10:09:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:34.946 10:09:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:35.513 nvme0n1 00:28:35.513 10:09:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:35.513 10:09:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:35.513 10:09:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:35.513 10:09:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:35.513 10:09:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:35.513 10:09:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:35.513 10:09:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:35.513 10:09:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:35.513 10:09:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:35.513 10:09:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:35.513 10:09:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:35.513 10:09:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:35.513 10:09:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:28:35.513 10:09:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:35.513 10:09:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:35.513 10:09:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:35.513 10:09:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:35.513 10:09:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZmVhYmU3MTA5ZGQ4MTAyNDkwNDQ3ZDUzY2JhMmY0M2UwyzRB: 00:28:35.513 10:09:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZmM4ODI3ZjI0MDNlNDNjNWYyZDczNjY0YTM5ODJiZTBV5Kvy: 00:28:35.513 10:09:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:35.513 10:09:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:35.513 10:09:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:01:ZmVhYmU3MTA5ZGQ4MTAyNDkwNDQ3ZDUzY2JhMmY0M2UwyzRB: 00:28:35.513 10:09:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZmM4ODI3ZjI0MDNlNDNjNWYyZDczNjY0YTM5ODJiZTBV5Kvy: ]] 00:28:35.513 10:09:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZmM4ODI3ZjI0MDNlNDNjNWYyZDczNjY0YTM5ODJiZTBV5Kvy: 00:28:35.513 10:09:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:28:35.513 10:09:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:35.513 10:09:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:35.513 10:09:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:35.513 10:09:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:35.513 10:09:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:35.513 10:09:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:28:35.513 10:09:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:35.513 10:09:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:35.513 10:09:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:35.513 10:09:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:35.513 10:09:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:35.513 10:09:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:35.513 10:09:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:35.513 10:09:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:35.513 10:09:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:35.514 10:09:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:35.514 10:09:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:35.514 10:09:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:35.514 10:09:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:35.514 10:09:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:35.514 10:09:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:35.514 10:09:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:35.514 10:09:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:36.082 nvme0n1 00:28:36.082 10:09:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:36.082 10:09:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:36.082 10:09:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:36.082 10:09:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:36.082 10:09:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:36.082 10:09:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:36.082 10:09:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:36.082 
10:09:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:36.082 10:09:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:36.082 10:09:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:36.082 10:09:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:36.082 10:09:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:36.082 10:09:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:28:36.082 10:09:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:36.082 10:09:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:36.082 10:09:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:36.082 10:09:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:36.082 10:09:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OGFiNzdiY2Q0NWEwOGNlMTRjNDliODA1NTM0MTE0NGE2NzIwMmUyZDBhNzhiYTg5You87w==: 00:28:36.082 10:09:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTc0YmNjMzk4Njk4MzU4YzhhZjRjNzIwZjhhYzA1NzdfNrbE: 00:28:36.082 10:09:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:36.082 10:09:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:36.082 10:09:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OGFiNzdiY2Q0NWEwOGNlMTRjNDliODA1NTM0MTE0NGE2NzIwMmUyZDBhNzhiYTg5You87w==: 00:28:36.082 10:09:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTc0YmNjMzk4Njk4MzU4YzhhZjRjNzIwZjhhYzA1NzdfNrbE: ]] 00:28:36.082 10:09:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTc0YmNjMzk4Njk4MzU4YzhhZjRjNzIwZjhhYzA1NzdfNrbE: 00:28:36.082 10:09:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:28:36.082 10:09:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:36.082 10:09:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:36.082 10:09:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:36.082 10:09:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:36.082 10:09:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:36.082 10:09:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:28:36.082 10:09:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:36.082 10:09:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:36.083 10:09:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:36.083 10:09:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:36.083 10:09:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:36.083 10:09:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:36.083 10:09:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:36.083 10:09:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:36.083 10:09:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:36.083 10:09:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
00:28:36.083 10:09:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:36.083 10:09:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:36.083 10:09:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:36.083 10:09:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:36.083 10:09:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:36.083 10:09:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:36.083 10:09:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:36.649 nvme0n1 00:28:36.650 10:09:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:36.650 10:09:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:36.650 10:09:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:36.650 10:09:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:36.650 10:09:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:36.650 10:09:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:36.650 10:09:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:36.650 10:09:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:36.650 10:09:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:36.650 10:09:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:36.650 10:09:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:36.650 10:09:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:36.650 10:09:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:28:36.650 10:09:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:36.650 10:09:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:36.650 10:09:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:36.650 10:09:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:36.650 10:09:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzdiNzAzMTBiMjcxZDJiNWUzNDVlMDk1ZGNkZjQ1ODIyOTc2YzFmYWJiODAxNTAzMDkwNzI2NDYzMWNjMGVmMSqQF48=: 00:28:36.650 10:09:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:36.650 10:09:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:36.650 10:09:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:36.650 10:09:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzdiNzAzMTBiMjcxZDJiNWUzNDVlMDk1ZGNkZjQ1ODIyOTc2YzFmYWJiODAxNTAzMDkwNzI2NDYzMWNjMGVmMSqQF48=: 00:28:36.650 10:09:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:36.650 10:09:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:28:36.650 10:09:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:36.650 10:09:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:36.650 10:09:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:36.650 
10:09:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:36.650 10:09:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:36.650 10:09:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:28:36.650 10:09:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:36.650 10:09:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:36.650 10:09:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:36.650 10:09:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:36.650 10:09:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:36.650 10:09:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:36.650 10:09:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:36.650 10:09:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:36.650 10:09:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:36.650 10:09:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:36.650 10:09:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:36.650 10:09:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:36.650 10:09:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:36.650 10:09:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:36.650 10:09:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:36.650 10:09:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:36.650 10:09:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:37.216 nvme0n1 00:28:37.216 10:09:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:37.216 10:09:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:37.216 10:09:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:37.216 10:09:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:37.216 10:09:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:37.216 10:09:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:37.216 10:09:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:37.216 10:09:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:37.216 10:09:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:37.216 10:09:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:37.216 10:09:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:37.216 10:09:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:28:37.216 10:09:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:37.216 10:09:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:37.216 10:09:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe2048 0 00:28:37.216 10:09:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:37.216 10:09:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:37.216 10:09:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:37.216 10:09:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:37.216 10:09:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDgyNDRhYzRmYjRiZTA3ODhhN2Y2ZTk1OTAzNDgyNTRKWIDk: 00:28:37.216 10:09:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NWRkMmYyNDNkMWE3ZmFkNmEwMDljYmQ4YTBkYjcwMGZlMjJiMzAzMWVjMWZmNWI3MTlkMWQ2ZDJlZDU3NjMwMgqpwkE=: 00:28:37.216 10:09:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:37.216 10:09:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:37.216 10:09:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDgyNDRhYzRmYjRiZTA3ODhhN2Y2ZTk1OTAzNDgyNTRKWIDk: 00:28:37.216 10:09:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NWRkMmYyNDNkMWE3ZmFkNmEwMDljYmQ4YTBkYjcwMGZlMjJiMzAzMWVjMWZmNWI3MTlkMWQ2ZDJlZDU3NjMwMgqpwkE=: ]] 00:28:37.216 10:09:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NWRkMmYyNDNkMWE3ZmFkNmEwMDljYmQ4YTBkYjcwMGZlMjJiMzAzMWVjMWZmNWI3MTlkMWQ2ZDJlZDU3NjMwMgqpwkE=: 00:28:37.216 10:09:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:28:37.216 10:09:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:37.216 10:09:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:37.216 10:09:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:37.216 10:09:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:37.216 10:09:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:37.216 10:09:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:28:37.216 10:09:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:37.216 10:09:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:37.216 10:09:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:37.216 10:09:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:37.216 10:09:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:37.216 10:09:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:37.216 10:09:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:37.216 10:09:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:37.216 10:09:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:37.216 10:09:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:37.216 10:09:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:37.216 10:09:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:37.216 10:09:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:37.216 10:09:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:37.216 10:09:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller 
-b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:37.216 10:09:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:37.216 10:09:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:37.475 nvme0n1 00:28:37.475 10:09:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:37.475 10:09:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:37.475 10:09:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:37.475 10:09:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:37.475 10:09:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:37.475 10:09:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:37.475 10:09:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:37.475 10:09:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:37.475 10:09:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:37.475 10:09:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:37.475 10:09:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:37.475 10:09:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:37.475 10:09:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:28:37.475 10:09:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:37.475 10:09:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:37.475 10:09:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:37.475 10:09:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:37.475 10:09:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmUyNWJlMDczMzllZmVlNDEwNzc0N2RiMTE4MjllMDQ2NmNhNmM3ZGYwOGZkNTIzL2wIVA==: 00:28:37.475 10:09:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YmNlOWFmOWE4Mjk2Y2MyN2Y3ODk3ZmMwMWNhY2FlY2VmMzBkYTM3ZTA4MTZhNjM2yZBA8Q==: 00:28:37.475 10:09:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:37.475 10:09:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:37.475 10:09:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmUyNWJlMDczMzllZmVlNDEwNzc0N2RiMTE4MjllMDQ2NmNhNmM3ZGYwOGZkNTIzL2wIVA==: 00:28:37.475 10:09:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YmNlOWFmOWE4Mjk2Y2MyN2Y3ODk3ZmMwMWNhY2FlY2VmMzBkYTM3ZTA4MTZhNjM2yZBA8Q==: ]] 00:28:37.475 10:09:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YmNlOWFmOWE4Mjk2Y2MyN2Y3ODk3ZmMwMWNhY2FlY2VmMzBkYTM3ZTA4MTZhNjM2yZBA8Q==: 00:28:37.475 10:09:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:28:37.475 10:09:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:37.475 10:09:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:37.475 10:09:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:37.475 10:09:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:37.475 10:09:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 
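For orientation, the round-trip that the xtrace output above keeps repeating for each key can be condensed into a short sketch. This is a reconstruction from the trace, not the verbatim host/auth.sh source: rpc_cmd, the transport parameters, the NQNs and the keyN/ckeyN naming all appear in the log lines, while the wrapper name run_auth_roundtrip and the inline ckeys expansion are illustrative assumptions.

# Hedged reconstruction of one connect_authenticate round-trip as traced above.
# rpc_cmd is the test suite's RPC wrapper; keys/ckeys are assumed to be the arrays
# holding the DHHC-1 secrets echoed in the log. run_auth_roundtrip is a made-up name.
run_auth_roundtrip() {
	local digest=$1 dhgroup=$2 keyid=$3

	# Restrict the host to a single digest/DH-group combination for this pass.
	rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

	# Attach with the host key, plus the controller key when one exists for this keyid.
	rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
		-q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
		--dhchap-key "key$keyid" \
		${ckeys[keyid]:+--dhchap-ctrlr-key "ckey$keyid"}

	# Authentication passed only if the controller is actually present afterwards.
	[[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]

	# Tear the controller down before the next digest/dhgroup/keyid combination.
	rpc_cmd bdev_nvme_detach_controller nvme0
}
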
00:28:37.475 10:09:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:28:37.475 10:09:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:37.475 10:09:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:37.475 10:09:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:37.475 10:09:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:37.475 10:09:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:37.475 10:09:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:37.475 10:09:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:37.475 10:09:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:37.475 10:09:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:37.475 10:09:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:37.475 10:09:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:37.475 10:09:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:37.475 10:09:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:37.475 10:09:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:37.476 10:09:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:37.476 10:09:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:37.476 10:09:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:37.476 nvme0n1 00:28:37.476 10:09:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:37.476 10:09:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:37.476 10:09:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:37.476 10:09:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:37.476 10:09:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:37.735 10:09:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:37.735 10:09:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:37.736 10:09:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:37.736 10:09:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:37.736 10:09:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:37.736 10:09:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:37.736 10:09:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:37.736 10:09:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:28:37.736 10:09:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:37.736 10:09:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:37.736 10:09:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:37.736 10:09:51 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@44 -- # keyid=2 00:28:37.736 10:09:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZmVhYmU3MTA5ZGQ4MTAyNDkwNDQ3ZDUzY2JhMmY0M2UwyzRB: 00:28:37.736 10:09:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZmM4ODI3ZjI0MDNlNDNjNWYyZDczNjY0YTM5ODJiZTBV5Kvy: 00:28:37.736 10:09:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:37.736 10:09:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:37.736 10:09:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZmVhYmU3MTA5ZGQ4MTAyNDkwNDQ3ZDUzY2JhMmY0M2UwyzRB: 00:28:37.736 10:09:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZmM4ODI3ZjI0MDNlNDNjNWYyZDczNjY0YTM5ODJiZTBV5Kvy: ]] 00:28:37.736 10:09:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZmM4ODI3ZjI0MDNlNDNjNWYyZDczNjY0YTM5ODJiZTBV5Kvy: 00:28:37.736 10:09:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:28:37.736 10:09:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:37.736 10:09:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:37.736 10:09:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:37.736 10:09:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:37.736 10:09:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:37.736 10:09:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:28:37.736 10:09:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:37.736 10:09:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:37.736 10:09:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:37.736 10:09:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:37.736 10:09:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:37.736 10:09:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:37.736 10:09:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:37.736 10:09:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:37.736 10:09:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:37.736 10:09:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:37.736 10:09:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:37.736 10:09:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:37.736 10:09:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:37.736 10:09:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:37.736 10:09:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:37.736 10:09:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:37.736 10:09:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:37.736 nvme0n1 00:28:37.736 10:09:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:37.736 10:09:51 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:37.736 10:09:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:37.736 10:09:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:37.736 10:09:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:37.736 10:09:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:37.736 10:09:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:37.736 10:09:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:37.736 10:09:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:37.736 10:09:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:37.736 10:09:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:37.736 10:09:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:37.736 10:09:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:28:37.736 10:09:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:37.736 10:09:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:37.736 10:09:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:37.736 10:09:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:37.736 10:09:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OGFiNzdiY2Q0NWEwOGNlMTRjNDliODA1NTM0MTE0NGE2NzIwMmUyZDBhNzhiYTg5You87w==: 00:28:37.736 10:09:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTc0YmNjMzk4Njk4MzU4YzhhZjRjNzIwZjhhYzA1NzdfNrbE: 00:28:37.736 10:09:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:37.736 10:09:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:37.736 10:09:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OGFiNzdiY2Q0NWEwOGNlMTRjNDliODA1NTM0MTE0NGE2NzIwMmUyZDBhNzhiYTg5You87w==: 00:28:37.736 10:09:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTc0YmNjMzk4Njk4MzU4YzhhZjRjNzIwZjhhYzA1NzdfNrbE: ]] 00:28:37.736 10:09:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTc0YmNjMzk4Njk4MzU4YzhhZjRjNzIwZjhhYzA1NzdfNrbE: 00:28:37.736 10:09:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:28:37.736 10:09:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:37.736 10:09:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:37.736 10:09:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:37.736 10:09:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:37.736 10:09:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:37.736 10:09:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:28:37.736 10:09:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:37.736 10:09:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:37.736 10:09:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:37.736 10:09:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:37.736 10:09:51 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:28:37.736 10:09:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:37.736 10:09:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:37.736 10:09:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:37.736 10:09:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:37.736 10:09:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:37.736 10:09:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:37.736 10:09:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:37.736 10:09:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:37.736 10:09:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:37.736 10:09:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:37.736 10:09:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:37.736 10:09:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:37.996 nvme0n1 00:28:37.996 10:09:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:37.996 10:09:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:37.996 10:09:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:37.996 10:09:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:37.996 10:09:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:37.996 10:09:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:37.996 10:09:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:37.996 10:09:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:37.996 10:09:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:37.996 10:09:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:37.996 10:09:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:37.996 10:09:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:37.997 10:09:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:28:37.997 10:09:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:37.997 10:09:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:37.997 10:09:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:37.997 10:09:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:37.997 10:09:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzdiNzAzMTBiMjcxZDJiNWUzNDVlMDk1ZGNkZjQ1ODIyOTc2YzFmYWJiODAxNTAzMDkwNzI2NDYzMWNjMGVmMSqQF48=: 00:28:37.997 10:09:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:37.997 10:09:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:37.997 10:09:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:37.997 10:09:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:NzdiNzAzMTBiMjcxZDJiNWUzNDVlMDk1ZGNkZjQ1ODIyOTc2YzFmYWJiODAxNTAzMDkwNzI2NDYzMWNjMGVmMSqQF48=: 00:28:37.997 10:09:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:37.997 10:09:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:28:37.997 10:09:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:37.997 10:09:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:37.997 10:09:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:37.997 10:09:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:37.997 10:09:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:37.997 10:09:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:28:37.997 10:09:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:37.997 10:09:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:37.997 10:09:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:37.997 10:09:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:37.997 10:09:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:37.997 10:09:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:37.997 10:09:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:37.997 10:09:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:37.997 10:09:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:37.997 10:09:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:37.997 10:09:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:37.997 10:09:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:37.997 10:09:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:37.997 10:09:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:37.997 10:09:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:37.997 10:09:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:37.997 10:09:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:38.257 nvme0n1 00:28:38.257 10:09:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:38.257 10:09:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:38.257 10:09:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:38.257 10:09:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:38.257 10:09:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:38.257 10:09:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:38.257 10:09:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:38.257 10:09:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:38.257 10:09:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:28:38.257 10:09:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:38.257 10:09:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:38.257 10:09:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:38.257 10:09:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:38.257 10:09:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:28:38.257 10:09:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:38.257 10:09:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:38.257 10:09:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:38.257 10:09:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:38.257 10:09:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDgyNDRhYzRmYjRiZTA3ODhhN2Y2ZTk1OTAzNDgyNTRKWIDk: 00:28:38.257 10:09:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NWRkMmYyNDNkMWE3ZmFkNmEwMDljYmQ4YTBkYjcwMGZlMjJiMzAzMWVjMWZmNWI3MTlkMWQ2ZDJlZDU3NjMwMgqpwkE=: 00:28:38.257 10:09:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:38.257 10:09:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:38.257 10:09:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDgyNDRhYzRmYjRiZTA3ODhhN2Y2ZTk1OTAzNDgyNTRKWIDk: 00:28:38.257 10:09:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NWRkMmYyNDNkMWE3ZmFkNmEwMDljYmQ4YTBkYjcwMGZlMjJiMzAzMWVjMWZmNWI3MTlkMWQ2ZDJlZDU3NjMwMgqpwkE=: ]] 00:28:38.257 10:09:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NWRkMmYyNDNkMWE3ZmFkNmEwMDljYmQ4YTBkYjcwMGZlMjJiMzAzMWVjMWZmNWI3MTlkMWQ2ZDJlZDU3NjMwMgqpwkE=: 00:28:38.257 10:09:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:28:38.257 10:09:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:38.257 10:09:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:38.257 10:09:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:38.257 10:09:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:38.257 10:09:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:38.257 10:09:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:28:38.257 10:09:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:38.257 10:09:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:38.257 10:09:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:38.257 10:09:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:38.257 10:09:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:38.257 10:09:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:38.257 10:09:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:38.257 10:09:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:38.257 10:09:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:38.257 10:09:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
00:28:38.257 10:09:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:38.257 10:09:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:38.257 10:09:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:38.257 10:09:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:38.257 10:09:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:38.257 10:09:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:38.257 10:09:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:38.257 nvme0n1 00:28:38.257 10:09:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:38.257 10:09:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:38.257 10:09:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:38.257 10:09:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:38.257 10:09:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:38.257 10:09:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:38.257 10:09:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:38.257 10:09:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:38.257 10:09:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:38.257 10:09:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:38.517 10:09:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:38.518 10:09:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:38.518 10:09:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:28:38.518 10:09:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:38.518 10:09:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:38.518 10:09:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:38.518 10:09:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:38.518 10:09:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmUyNWJlMDczMzllZmVlNDEwNzc0N2RiMTE4MjllMDQ2NmNhNmM3ZGYwOGZkNTIzL2wIVA==: 00:28:38.518 10:09:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YmNlOWFmOWE4Mjk2Y2MyN2Y3ODk3ZmMwMWNhY2FlY2VmMzBkYTM3ZTA4MTZhNjM2yZBA8Q==: 00:28:38.518 10:09:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:38.518 10:09:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:38.518 10:09:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmUyNWJlMDczMzllZmVlNDEwNzc0N2RiMTE4MjllMDQ2NmNhNmM3ZGYwOGZkNTIzL2wIVA==: 00:28:38.518 10:09:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YmNlOWFmOWE4Mjk2Y2MyN2Y3ODk3ZmMwMWNhY2FlY2VmMzBkYTM3ZTA4MTZhNjM2yZBA8Q==: ]] 00:28:38.518 10:09:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YmNlOWFmOWE4Mjk2Y2MyN2Y3ODk3ZmMwMWNhY2FlY2VmMzBkYTM3ZTA4MTZhNjM2yZBA8Q==: 00:28:38.518 10:09:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 
00:28:38.518 10:09:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:38.518 10:09:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:38.518 10:09:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:38.518 10:09:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:38.518 10:09:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:38.518 10:09:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:28:38.518 10:09:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:38.518 10:09:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:38.518 10:09:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:38.518 10:09:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:38.518 10:09:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:38.518 10:09:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:38.518 10:09:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:38.518 10:09:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:38.518 10:09:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:38.518 10:09:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:38.518 10:09:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:38.518 10:09:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:38.518 10:09:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:38.518 10:09:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:38.518 10:09:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:38.518 10:09:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:38.518 10:09:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:38.518 nvme0n1 00:28:38.518 10:09:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:38.518 10:09:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:38.518 10:09:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:38.518 10:09:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:38.518 10:09:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:38.518 10:09:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:38.518 10:09:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:38.518 10:09:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:38.518 10:09:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:38.518 10:09:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:38.518 10:09:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:38.518 10:09:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:28:38.518 10:09:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:28:38.518 10:09:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:38.518 10:09:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:38.518 10:09:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:38.518 10:09:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:38.518 10:09:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZmVhYmU3MTA5ZGQ4MTAyNDkwNDQ3ZDUzY2JhMmY0M2UwyzRB: 00:28:38.518 10:09:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZmM4ODI3ZjI0MDNlNDNjNWYyZDczNjY0YTM5ODJiZTBV5Kvy: 00:28:38.518 10:09:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:38.518 10:09:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:38.518 10:09:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZmVhYmU3MTA5ZGQ4MTAyNDkwNDQ3ZDUzY2JhMmY0M2UwyzRB: 00:28:38.518 10:09:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZmM4ODI3ZjI0MDNlNDNjNWYyZDczNjY0YTM5ODJiZTBV5Kvy: ]] 00:28:38.518 10:09:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZmM4ODI3ZjI0MDNlNDNjNWYyZDczNjY0YTM5ODJiZTBV5Kvy: 00:28:38.518 10:09:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:28:38.518 10:09:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:38.518 10:09:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:38.518 10:09:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:38.518 10:09:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:38.518 10:09:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:38.518 10:09:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:28:38.518 10:09:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:38.518 10:09:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:38.518 10:09:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:38.518 10:09:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:38.518 10:09:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:38.518 10:09:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:38.518 10:09:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:38.518 10:09:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:38.518 10:09:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:38.518 10:09:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:38.518 10:09:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:38.518 10:09:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:38.518 10:09:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:38.518 10:09:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:38.518 10:09:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:38.518 10:09:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:38.518 10:09:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:38.778 nvme0n1 00:28:38.778 10:09:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:38.778 10:09:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:38.778 10:09:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:38.778 10:09:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:38.778 10:09:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:38.778 10:09:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:38.778 10:09:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:38.778 10:09:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:38.778 10:09:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:38.778 10:09:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:38.778 10:09:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:38.778 10:09:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:38.778 10:09:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:28:38.778 10:09:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:38.778 10:09:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:38.778 10:09:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:38.778 10:09:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:38.778 10:09:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OGFiNzdiY2Q0NWEwOGNlMTRjNDliODA1NTM0MTE0NGE2NzIwMmUyZDBhNzhiYTg5You87w==: 00:28:38.778 10:09:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTc0YmNjMzk4Njk4MzU4YzhhZjRjNzIwZjhhYzA1NzdfNrbE: 00:28:38.778 10:09:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:38.778 10:09:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:38.778 10:09:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OGFiNzdiY2Q0NWEwOGNlMTRjNDliODA1NTM0MTE0NGE2NzIwMmUyZDBhNzhiYTg5You87w==: 00:28:38.778 10:09:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTc0YmNjMzk4Njk4MzU4YzhhZjRjNzIwZjhhYzA1NzdfNrbE: ]] 00:28:38.778 10:09:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTc0YmNjMzk4Njk4MzU4YzhhZjRjNzIwZjhhYzA1NzdfNrbE: 00:28:38.778 10:09:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:28:38.778 10:09:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:38.778 10:09:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:38.778 10:09:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:38.778 10:09:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:38.778 10:09:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:38.778 10:09:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:28:38.778 10:09:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:38.778 10:09:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:38.778 10:09:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:38.778 10:09:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:38.778 10:09:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:38.778 10:09:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:38.778 10:09:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:38.778 10:09:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:38.778 10:09:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:38.778 10:09:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:38.778 10:09:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:38.778 10:09:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:38.778 10:09:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:38.778 10:09:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:38.778 10:09:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:38.778 10:09:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:38.778 10:09:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:39.039 nvme0n1 00:28:39.039 10:09:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:39.039 10:09:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:39.039 10:09:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:39.039 10:09:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:39.039 10:09:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:39.039 10:09:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:39.039 10:09:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:39.039 10:09:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:39.039 10:09:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:39.039 10:09:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:39.039 10:09:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:39.039 10:09:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:39.039 10:09:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:28:39.039 10:09:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:39.039 10:09:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:39.039 10:09:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:39.039 10:09:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:39.039 10:09:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:NzdiNzAzMTBiMjcxZDJiNWUzNDVlMDk1ZGNkZjQ1ODIyOTc2YzFmYWJiODAxNTAzMDkwNzI2NDYzMWNjMGVmMSqQF48=: 00:28:39.039 10:09:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:39.039 10:09:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:39.039 10:09:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:39.039 10:09:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzdiNzAzMTBiMjcxZDJiNWUzNDVlMDk1ZGNkZjQ1ODIyOTc2YzFmYWJiODAxNTAzMDkwNzI2NDYzMWNjMGVmMSqQF48=: 00:28:39.039 10:09:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:39.039 10:09:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:28:39.039 10:09:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:39.039 10:09:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:39.039 10:09:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:39.039 10:09:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:39.039 10:09:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:39.039 10:09:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:28:39.039 10:09:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:39.039 10:09:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:39.039 10:09:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:39.039 10:09:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:39.039 10:09:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:39.039 10:09:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:39.039 10:09:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:39.039 10:09:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:39.039 10:09:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:39.039 10:09:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:39.039 10:09:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:39.039 10:09:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:39.039 10:09:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:39.039 10:09:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:39.039 10:09:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:39.039 10:09:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:39.039 10:09:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:39.039 nvme0n1 00:28:39.039 10:09:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:39.039 10:09:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:39.039 10:09:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:39.039 10:09:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:39.039 10:09:52 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:39.039 10:09:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:39.300 10:09:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:39.300 10:09:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:39.300 10:09:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:39.300 10:09:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:39.300 10:09:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:39.300 10:09:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:39.300 10:09:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:39.300 10:09:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:28:39.300 10:09:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:39.300 10:09:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:39.300 10:09:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:39.300 10:09:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:39.300 10:09:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDgyNDRhYzRmYjRiZTA3ODhhN2Y2ZTk1OTAzNDgyNTRKWIDk: 00:28:39.300 10:09:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NWRkMmYyNDNkMWE3ZmFkNmEwMDljYmQ4YTBkYjcwMGZlMjJiMzAzMWVjMWZmNWI3MTlkMWQ2ZDJlZDU3NjMwMgqpwkE=: 00:28:39.300 10:09:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:39.300 10:09:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:39.300 10:09:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDgyNDRhYzRmYjRiZTA3ODhhN2Y2ZTk1OTAzNDgyNTRKWIDk: 00:28:39.300 10:09:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NWRkMmYyNDNkMWE3ZmFkNmEwMDljYmQ4YTBkYjcwMGZlMjJiMzAzMWVjMWZmNWI3MTlkMWQ2ZDJlZDU3NjMwMgqpwkE=: ]] 00:28:39.300 10:09:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NWRkMmYyNDNkMWE3ZmFkNmEwMDljYmQ4YTBkYjcwMGZlMjJiMzAzMWVjMWZmNWI3MTlkMWQ2ZDJlZDU3NjMwMgqpwkE=: 00:28:39.300 10:09:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:28:39.300 10:09:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:39.300 10:09:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:39.300 10:09:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:39.300 10:09:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:39.300 10:09:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:39.300 10:09:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:28:39.300 10:09:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:39.300 10:09:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:39.300 10:09:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:39.300 10:09:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:39.300 10:09:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:39.300 10:09:52 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:28:39.300 10:09:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:39.300 10:09:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:39.300 10:09:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:39.300 10:09:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:39.300 10:09:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:39.300 10:09:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:39.300 10:09:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:39.300 10:09:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:39.300 10:09:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:39.300 10:09:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:39.300 10:09:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:39.300 nvme0n1 00:28:39.300 10:09:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:39.300 10:09:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:39.300 10:09:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:39.300 10:09:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:39.300 10:09:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:39.300 10:09:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:39.300 10:09:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:39.300 10:09:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:39.300 10:09:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:39.300 10:09:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:39.560 10:09:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:39.560 10:09:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:39.560 10:09:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:28:39.560 10:09:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:39.560 10:09:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:39.560 10:09:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:39.560 10:09:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:39.560 10:09:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmUyNWJlMDczMzllZmVlNDEwNzc0N2RiMTE4MjllMDQ2NmNhNmM3ZGYwOGZkNTIzL2wIVA==: 00:28:39.560 10:09:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YmNlOWFmOWE4Mjk2Y2MyN2Y3ODk3ZmMwMWNhY2FlY2VmMzBkYTM3ZTA4MTZhNjM2yZBA8Q==: 00:28:39.560 10:09:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:39.560 10:09:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:39.560 10:09:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NmUyNWJlMDczMzllZmVlNDEwNzc0N2RiMTE4MjllMDQ2NmNhNmM3ZGYwOGZkNTIzL2wIVA==: 00:28:39.560 10:09:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YmNlOWFmOWE4Mjk2Y2MyN2Y3ODk3ZmMwMWNhY2FlY2VmMzBkYTM3ZTA4MTZhNjM2yZBA8Q==: ]] 00:28:39.560 10:09:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YmNlOWFmOWE4Mjk2Y2MyN2Y3ODk3ZmMwMWNhY2FlY2VmMzBkYTM3ZTA4MTZhNjM2yZBA8Q==: 00:28:39.560 10:09:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:28:39.560 10:09:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:39.560 10:09:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:39.560 10:09:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:39.560 10:09:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:39.560 10:09:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:39.560 10:09:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:28:39.560 10:09:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:39.560 10:09:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:39.560 10:09:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:39.560 10:09:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:39.560 10:09:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:39.560 10:09:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:39.560 10:09:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:39.560 10:09:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:39.560 10:09:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:39.560 10:09:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:39.560 10:09:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:39.560 10:09:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:39.560 10:09:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:39.560 10:09:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:39.560 10:09:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:39.560 10:09:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:39.560 10:09:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:39.561 nvme0n1 00:28:39.561 10:09:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:39.561 10:09:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:39.561 10:09:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:39.561 10:09:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:39.561 10:09:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:39.561 10:09:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:39.561 10:09:53 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:39.561 10:09:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:39.561 10:09:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:39.561 10:09:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:39.561 10:09:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:39.561 10:09:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:39.561 10:09:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:28:39.561 10:09:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:39.561 10:09:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:39.561 10:09:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:39.561 10:09:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:39.561 10:09:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZmVhYmU3MTA5ZGQ4MTAyNDkwNDQ3ZDUzY2JhMmY0M2UwyzRB: 00:28:39.561 10:09:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZmM4ODI3ZjI0MDNlNDNjNWYyZDczNjY0YTM5ODJiZTBV5Kvy: 00:28:39.561 10:09:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:39.561 10:09:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:39.561 10:09:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZmVhYmU3MTA5ZGQ4MTAyNDkwNDQ3ZDUzY2JhMmY0M2UwyzRB: 00:28:39.561 10:09:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZmM4ODI3ZjI0MDNlNDNjNWYyZDczNjY0YTM5ODJiZTBV5Kvy: ]] 00:28:39.561 10:09:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZmM4ODI3ZjI0MDNlNDNjNWYyZDczNjY0YTM5ODJiZTBV5Kvy: 00:28:39.561 10:09:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:28:39.561 10:09:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:39.561 10:09:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:39.561 10:09:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:39.561 10:09:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:39.561 10:09:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:39.561 10:09:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:28:39.561 10:09:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:39.561 10:09:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:39.830 10:09:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:39.830 10:09:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:39.830 10:09:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:39.830 10:09:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:39.830 10:09:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:39.830 10:09:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:39.830 10:09:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:39.830 10:09:53 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:39.830 10:09:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:39.830 10:09:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:39.830 10:09:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:39.830 10:09:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:39.830 10:09:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:39.830 10:09:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:39.830 10:09:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:39.830 nvme0n1 00:28:39.830 10:09:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:39.830 10:09:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:39.830 10:09:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:39.830 10:09:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:39.830 10:09:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:39.830 10:09:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:39.830 10:09:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:39.830 10:09:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:39.830 10:09:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:39.830 10:09:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:39.830 10:09:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:39.830 10:09:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:39.830 10:09:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:28:39.830 10:09:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:39.830 10:09:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:39.830 10:09:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:39.830 10:09:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:39.831 10:09:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OGFiNzdiY2Q0NWEwOGNlMTRjNDliODA1NTM0MTE0NGE2NzIwMmUyZDBhNzhiYTg5You87w==: 00:28:39.831 10:09:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTc0YmNjMzk4Njk4MzU4YzhhZjRjNzIwZjhhYzA1NzdfNrbE: 00:28:39.831 10:09:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:39.831 10:09:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:39.831 10:09:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OGFiNzdiY2Q0NWEwOGNlMTRjNDliODA1NTM0MTE0NGE2NzIwMmUyZDBhNzhiYTg5You87w==: 00:28:39.831 10:09:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTc0YmNjMzk4Njk4MzU4YzhhZjRjNzIwZjhhYzA1NzdfNrbE: ]] 00:28:39.831 10:09:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTc0YmNjMzk4Njk4MzU4YzhhZjRjNzIwZjhhYzA1NzdfNrbE: 00:28:39.831 10:09:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:28:39.831 10:09:53 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:39.831 10:09:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:39.831 10:09:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:39.831 10:09:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:39.831 10:09:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:39.831 10:09:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:28:39.831 10:09:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:39.831 10:09:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:39.831 10:09:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:39.831 10:09:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:39.831 10:09:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:39.831 10:09:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:39.831 10:09:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:39.831 10:09:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:39.831 10:09:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:39.831 10:09:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:39.831 10:09:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:39.831 10:09:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:39.831 10:09:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:39.831 10:09:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:39.831 10:09:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:39.831 10:09:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:39.831 10:09:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:40.097 nvme0n1 00:28:40.097 10:09:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:40.097 10:09:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:40.097 10:09:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:40.097 10:09:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:40.097 10:09:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:40.097 10:09:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:40.097 10:09:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:40.097 10:09:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:40.097 10:09:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:40.097 10:09:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:40.097 10:09:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:40.097 10:09:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:28:40.097 10:09:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:28:40.097 10:09:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:40.097 10:09:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:40.097 10:09:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:40.097 10:09:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:40.097 10:09:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzdiNzAzMTBiMjcxZDJiNWUzNDVlMDk1ZGNkZjQ1ODIyOTc2YzFmYWJiODAxNTAzMDkwNzI2NDYzMWNjMGVmMSqQF48=: 00:28:40.097 10:09:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:40.097 10:09:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:40.097 10:09:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:40.097 10:09:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzdiNzAzMTBiMjcxZDJiNWUzNDVlMDk1ZGNkZjQ1ODIyOTc2YzFmYWJiODAxNTAzMDkwNzI2NDYzMWNjMGVmMSqQF48=: 00:28:40.097 10:09:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:40.097 10:09:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:28:40.097 10:09:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:40.097 10:09:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:40.097 10:09:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:40.097 10:09:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:40.097 10:09:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:40.097 10:09:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:28:40.097 10:09:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:40.097 10:09:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:40.097 10:09:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:40.097 10:09:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:40.097 10:09:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:40.097 10:09:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:40.097 10:09:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:40.097 10:09:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:40.097 10:09:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:40.097 10:09:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:40.097 10:09:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:40.097 10:09:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:40.097 10:09:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:40.097 10:09:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:40.097 10:09:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:40.097 10:09:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:28:40.097 10:09:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:40.357 nvme0n1 00:28:40.357 10:09:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:40.357 10:09:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:40.357 10:09:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:40.357 10:09:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:40.357 10:09:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:40.357 10:09:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:40.357 10:09:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:40.357 10:09:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:40.357 10:09:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:40.357 10:09:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:40.357 10:09:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:40.357 10:09:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:40.357 10:09:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:40.357 10:09:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:28:40.357 10:09:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:40.357 10:09:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:40.357 10:09:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:40.357 10:09:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:40.357 10:09:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDgyNDRhYzRmYjRiZTA3ODhhN2Y2ZTk1OTAzNDgyNTRKWIDk: 00:28:40.357 10:09:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NWRkMmYyNDNkMWE3ZmFkNmEwMDljYmQ4YTBkYjcwMGZlMjJiMzAzMWVjMWZmNWI3MTlkMWQ2ZDJlZDU3NjMwMgqpwkE=: 00:28:40.357 10:09:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:40.357 10:09:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:40.357 10:09:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDgyNDRhYzRmYjRiZTA3ODhhN2Y2ZTk1OTAzNDgyNTRKWIDk: 00:28:40.357 10:09:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NWRkMmYyNDNkMWE3ZmFkNmEwMDljYmQ4YTBkYjcwMGZlMjJiMzAzMWVjMWZmNWI3MTlkMWQ2ZDJlZDU3NjMwMgqpwkE=: ]] 00:28:40.357 10:09:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NWRkMmYyNDNkMWE3ZmFkNmEwMDljYmQ4YTBkYjcwMGZlMjJiMzAzMWVjMWZmNWI3MTlkMWQ2ZDJlZDU3NjMwMgqpwkE=: 00:28:40.357 10:09:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:28:40.357 10:09:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:40.357 10:09:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:40.357 10:09:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:40.357 10:09:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:40.357 10:09:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:40.357 10:09:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe6144 00:28:40.357 10:09:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:40.357 10:09:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:40.357 10:09:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:40.357 10:09:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:40.357 10:09:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:40.357 10:09:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:40.357 10:09:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:40.357 10:09:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:40.357 10:09:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:40.357 10:09:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:40.357 10:09:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:40.357 10:09:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:40.357 10:09:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:40.357 10:09:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:40.357 10:09:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:40.357 10:09:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:40.357 10:09:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:40.926 nvme0n1 00:28:40.926 10:09:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:40.926 10:09:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:40.926 10:09:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:40.926 10:09:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:40.926 10:09:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:40.926 10:09:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:40.926 10:09:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:40.926 10:09:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:40.926 10:09:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:40.926 10:09:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:40.926 10:09:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:40.926 10:09:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:40.926 10:09:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:28:40.926 10:09:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:40.926 10:09:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:40.926 10:09:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:40.926 10:09:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:40.926 10:09:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:NmUyNWJlMDczMzllZmVlNDEwNzc0N2RiMTE4MjllMDQ2NmNhNmM3ZGYwOGZkNTIzL2wIVA==: 00:28:40.926 10:09:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YmNlOWFmOWE4Mjk2Y2MyN2Y3ODk3ZmMwMWNhY2FlY2VmMzBkYTM3ZTA4MTZhNjM2yZBA8Q==: 00:28:40.926 10:09:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:40.926 10:09:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:40.926 10:09:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmUyNWJlMDczMzllZmVlNDEwNzc0N2RiMTE4MjllMDQ2NmNhNmM3ZGYwOGZkNTIzL2wIVA==: 00:28:40.926 10:09:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YmNlOWFmOWE4Mjk2Y2MyN2Y3ODk3ZmMwMWNhY2FlY2VmMzBkYTM3ZTA4MTZhNjM2yZBA8Q==: ]] 00:28:40.926 10:09:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YmNlOWFmOWE4Mjk2Y2MyN2Y3ODk3ZmMwMWNhY2FlY2VmMzBkYTM3ZTA4MTZhNjM2yZBA8Q==: 00:28:40.926 10:09:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:28:40.926 10:09:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:40.926 10:09:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:40.926 10:09:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:40.926 10:09:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:40.926 10:09:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:40.926 10:09:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:28:40.926 10:09:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:40.926 10:09:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:40.926 10:09:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:40.926 10:09:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:40.926 10:09:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:40.926 10:09:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:40.926 10:09:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:40.926 10:09:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:40.926 10:09:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:40.926 10:09:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:40.926 10:09:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:40.926 10:09:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:40.927 10:09:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:40.927 10:09:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:40.927 10:09:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:40.927 10:09:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:40.927 10:09:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:41.188 nvme0n1 00:28:41.188 10:09:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:41.188 10:09:54 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:41.188 10:09:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:41.188 10:09:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:41.188 10:09:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:41.188 10:09:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:41.188 10:09:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:41.188 10:09:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:41.188 10:09:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:41.188 10:09:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:41.188 10:09:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:41.188 10:09:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:41.188 10:09:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:28:41.188 10:09:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:41.188 10:09:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:41.188 10:09:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:41.188 10:09:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:41.188 10:09:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZmVhYmU3MTA5ZGQ4MTAyNDkwNDQ3ZDUzY2JhMmY0M2UwyzRB: 00:28:41.188 10:09:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZmM4ODI3ZjI0MDNlNDNjNWYyZDczNjY0YTM5ODJiZTBV5Kvy: 00:28:41.188 10:09:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:41.188 10:09:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:41.188 10:09:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZmVhYmU3MTA5ZGQ4MTAyNDkwNDQ3ZDUzY2JhMmY0M2UwyzRB: 00:28:41.188 10:09:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZmM4ODI3ZjI0MDNlNDNjNWYyZDczNjY0YTM5ODJiZTBV5Kvy: ]] 00:28:41.188 10:09:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZmM4ODI3ZjI0MDNlNDNjNWYyZDczNjY0YTM5ODJiZTBV5Kvy: 00:28:41.188 10:09:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:28:41.188 10:09:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:41.188 10:09:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:41.188 10:09:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:41.188 10:09:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:41.188 10:09:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:41.188 10:09:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:28:41.188 10:09:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:41.188 10:09:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:41.188 10:09:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:41.188 10:09:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:41.188 10:09:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:28:41.188 10:09:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:41.188 10:09:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:41.188 10:09:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:41.188 10:09:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:41.188 10:09:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:41.188 10:09:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:41.188 10:09:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:41.188 10:09:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:41.188 10:09:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:41.188 10:09:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:41.188 10:09:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:41.188 10:09:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:41.484 nvme0n1 00:28:41.484 10:09:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:41.484 10:09:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:41.484 10:09:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:41.484 10:09:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:41.484 10:09:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:41.484 10:09:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:41.484 10:09:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:41.485 10:09:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:41.485 10:09:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:41.485 10:09:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:41.485 10:09:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:41.485 10:09:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:41.485 10:09:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:28:41.485 10:09:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:41.485 10:09:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:41.485 10:09:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:41.485 10:09:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:41.485 10:09:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OGFiNzdiY2Q0NWEwOGNlMTRjNDliODA1NTM0MTE0NGE2NzIwMmUyZDBhNzhiYTg5You87w==: 00:28:41.485 10:09:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTc0YmNjMzk4Njk4MzU4YzhhZjRjNzIwZjhhYzA1NzdfNrbE: 00:28:41.485 10:09:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:41.485 10:09:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:41.485 10:09:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:OGFiNzdiY2Q0NWEwOGNlMTRjNDliODA1NTM0MTE0NGE2NzIwMmUyZDBhNzhiYTg5You87w==: 00:28:41.485 10:09:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTc0YmNjMzk4Njk4MzU4YzhhZjRjNzIwZjhhYzA1NzdfNrbE: ]] 00:28:41.485 10:09:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTc0YmNjMzk4Njk4MzU4YzhhZjRjNzIwZjhhYzA1NzdfNrbE: 00:28:41.485 10:09:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:28:41.485 10:09:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:41.485 10:09:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:41.485 10:09:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:41.485 10:09:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:41.485 10:09:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:41.485 10:09:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:28:41.485 10:09:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:41.485 10:09:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:41.485 10:09:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:41.485 10:09:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:41.485 10:09:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:41.485 10:09:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:41.745 10:09:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:41.745 10:09:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:41.745 10:09:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:41.745 10:09:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:41.745 10:09:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:41.745 10:09:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:41.745 10:09:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:41.745 10:09:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:41.745 10:09:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:41.745 10:09:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:41.745 10:09:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:42.005 nvme0n1 00:28:42.005 10:09:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:42.005 10:09:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:42.005 10:09:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:42.005 10:09:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:42.005 10:09:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:42.005 10:09:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:42.005 10:09:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 
]] 00:28:42.005 10:09:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:42.005 10:09:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:42.005 10:09:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:42.005 10:09:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:42.005 10:09:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:42.005 10:09:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:28:42.005 10:09:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:42.005 10:09:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:42.005 10:09:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:42.005 10:09:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:42.005 10:09:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzdiNzAzMTBiMjcxZDJiNWUzNDVlMDk1ZGNkZjQ1ODIyOTc2YzFmYWJiODAxNTAzMDkwNzI2NDYzMWNjMGVmMSqQF48=: 00:28:42.005 10:09:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:42.005 10:09:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:42.005 10:09:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:42.005 10:09:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzdiNzAzMTBiMjcxZDJiNWUzNDVlMDk1ZGNkZjQ1ODIyOTc2YzFmYWJiODAxNTAzMDkwNzI2NDYzMWNjMGVmMSqQF48=: 00:28:42.005 10:09:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:42.005 10:09:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:28:42.005 10:09:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:42.005 10:09:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:42.005 10:09:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:42.005 10:09:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:42.005 10:09:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:42.005 10:09:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:28:42.005 10:09:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:42.005 10:09:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:42.005 10:09:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:42.005 10:09:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:42.005 10:09:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:42.005 10:09:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:42.005 10:09:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:42.005 10:09:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:42.005 10:09:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:42.005 10:09:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:42.005 10:09:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:42.005 10:09:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 
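The xtrace output above is dense, so it helps to note the shape of the loop it records: host/auth.sh walks every digest, DH group, and key index, programs the matching secret on the target side, and then runs one connect/verify/detach round from the host. A minimal bash sketch of that loop, using only the loop headers and helper names visible in the trace (the digests/dhgroups/keys/ckeys arrays are assumed to be populated earlier in the script, which this excerpt does not show):

# Loop recorded at host/auth.sh@100-104 in the trace above; a sketch, not the script itself.
for digest in "${digests[@]}"; do            # sha384 and sha512 appear in this excerpt
    for dhgroup in "${dhgroups[@]}"; do      # ffdhe6144, ffdhe8192, ffdhe2048 appear here
        for keyid in "${!keys[@]}"; do       # key indexes 0..4
            nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"    # target-side secret for this round
            connect_authenticate "$digest" "$dhgroup" "$keyid"  # host-side attach/verify/detach
        done
    done
done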
00:28:42.005 10:09:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:42.005 10:09:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:42.005 10:09:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:42.005 10:09:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:42.005 10:09:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:42.265 nvme0n1 00:28:42.265 10:09:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:42.265 10:09:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:42.265 10:09:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:42.265 10:09:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:42.265 10:09:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:42.265 10:09:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:42.265 10:09:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:42.265 10:09:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:42.265 10:09:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:42.265 10:09:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:42.265 10:09:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:42.265 10:09:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:42.265 10:09:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:42.265 10:09:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:28:42.265 10:09:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:42.265 10:09:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:42.265 10:09:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:42.265 10:09:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:42.265 10:09:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDgyNDRhYzRmYjRiZTA3ODhhN2Y2ZTk1OTAzNDgyNTRKWIDk: 00:28:42.265 10:09:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NWRkMmYyNDNkMWE3ZmFkNmEwMDljYmQ4YTBkYjcwMGZlMjJiMzAzMWVjMWZmNWI3MTlkMWQ2ZDJlZDU3NjMwMgqpwkE=: 00:28:42.265 10:09:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:42.265 10:09:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:42.265 10:09:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDgyNDRhYzRmYjRiZTA3ODhhN2Y2ZTk1OTAzNDgyNTRKWIDk: 00:28:42.265 10:09:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NWRkMmYyNDNkMWE3ZmFkNmEwMDljYmQ4YTBkYjcwMGZlMjJiMzAzMWVjMWZmNWI3MTlkMWQ2ZDJlZDU3NjMwMgqpwkE=: ]] 00:28:42.265 10:09:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NWRkMmYyNDNkMWE3ZmFkNmEwMDljYmQ4YTBkYjcwMGZlMjJiMzAzMWVjMWZmNWI3MTlkMWQ2ZDJlZDU3NjMwMgqpwkE=: 00:28:42.265 10:09:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:28:42.265 10:09:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 
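Each connect_authenticate round in the trace reduces to four bdev_nvme RPCs on the host side. A minimal sketch of one round (sha384, ffdhe6144, keyid 1), assuming rpc_cmd is the test harness's wrapper around SPDK's rpc.py and that the key1/ckey1 names were registered with the initiator earlier in the run:

# Restrict the host to one digest/DH group, connect with the per-keyid secrets,
# confirm the controller name, then tear it down before the next iteration.
rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1
rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name'   # expect: nvme0
rpc_cmd bdev_nvme_detach_controller nvme0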
00:28:42.265 10:09:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:42.265 10:09:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:42.265 10:09:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:42.265 10:09:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:42.265 10:09:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:28:42.265 10:09:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:42.265 10:09:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:42.265 10:09:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:42.265 10:09:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:42.265 10:09:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:42.265 10:09:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:42.265 10:09:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:42.265 10:09:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:42.265 10:09:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:42.265 10:09:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:42.265 10:09:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:42.265 10:09:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:42.265 10:09:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:42.265 10:09:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:42.265 10:09:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:42.265 10:09:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:42.265 10:09:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:42.834 nvme0n1 00:28:42.834 10:09:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:42.834 10:09:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:42.834 10:09:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:42.834 10:09:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:42.834 10:09:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:42.834 10:09:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:42.834 10:09:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:42.834 10:09:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:42.834 10:09:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:42.834 10:09:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:42.834 10:09:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:42.834 10:09:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:42.834 10:09:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe8192 1 00:28:42.834 10:09:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:42.834 10:09:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:42.834 10:09:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:42.834 10:09:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:42.834 10:09:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmUyNWJlMDczMzllZmVlNDEwNzc0N2RiMTE4MjllMDQ2NmNhNmM3ZGYwOGZkNTIzL2wIVA==: 00:28:42.834 10:09:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YmNlOWFmOWE4Mjk2Y2MyN2Y3ODk3ZmMwMWNhY2FlY2VmMzBkYTM3ZTA4MTZhNjM2yZBA8Q==: 00:28:42.834 10:09:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:42.834 10:09:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:42.834 10:09:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmUyNWJlMDczMzllZmVlNDEwNzc0N2RiMTE4MjllMDQ2NmNhNmM3ZGYwOGZkNTIzL2wIVA==: 00:28:42.834 10:09:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YmNlOWFmOWE4Mjk2Y2MyN2Y3ODk3ZmMwMWNhY2FlY2VmMzBkYTM3ZTA4MTZhNjM2yZBA8Q==: ]] 00:28:42.834 10:09:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YmNlOWFmOWE4Mjk2Y2MyN2Y3ODk3ZmMwMWNhY2FlY2VmMzBkYTM3ZTA4MTZhNjM2yZBA8Q==: 00:28:42.834 10:09:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:28:42.834 10:09:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:42.834 10:09:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:42.834 10:09:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:42.834 10:09:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:42.834 10:09:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:42.834 10:09:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:28:42.834 10:09:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:42.834 10:09:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:42.834 10:09:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:42.834 10:09:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:42.834 10:09:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:42.834 10:09:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:42.834 10:09:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:42.834 10:09:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:42.834 10:09:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:42.834 10:09:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:42.834 10:09:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:42.834 10:09:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:42.834 10:09:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:42.834 10:09:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:42.835 10:09:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:42.835 10:09:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:42.835 10:09:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:43.404 nvme0n1 00:28:43.404 10:09:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:43.404 10:09:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:43.404 10:09:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:43.404 10:09:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:43.404 10:09:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:43.404 10:09:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:43.404 10:09:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:43.404 10:09:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:43.404 10:09:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:43.404 10:09:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:43.404 10:09:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:43.404 10:09:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:43.404 10:09:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:28:43.404 10:09:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:43.404 10:09:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:43.404 10:09:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:43.404 10:09:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:43.404 10:09:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZmVhYmU3MTA5ZGQ4MTAyNDkwNDQ3ZDUzY2JhMmY0M2UwyzRB: 00:28:43.404 10:09:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZmM4ODI3ZjI0MDNlNDNjNWYyZDczNjY0YTM5ODJiZTBV5Kvy: 00:28:43.404 10:09:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:43.404 10:09:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:43.404 10:09:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZmVhYmU3MTA5ZGQ4MTAyNDkwNDQ3ZDUzY2JhMmY0M2UwyzRB: 00:28:43.404 10:09:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZmM4ODI3ZjI0MDNlNDNjNWYyZDczNjY0YTM5ODJiZTBV5Kvy: ]] 00:28:43.404 10:09:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZmM4ODI3ZjI0MDNlNDNjNWYyZDczNjY0YTM5ODJiZTBV5Kvy: 00:28:43.404 10:09:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:28:43.404 10:09:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:43.404 10:09:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:43.404 10:09:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:43.404 10:09:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:43.404 10:09:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:43.404 10:09:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe8192 00:28:43.404 10:09:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:43.404 10:09:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:43.404 10:09:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:43.664 10:09:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:43.664 10:09:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:43.664 10:09:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:43.664 10:09:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:43.664 10:09:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:43.664 10:09:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:43.664 10:09:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:43.664 10:09:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:43.664 10:09:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:43.664 10:09:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:43.664 10:09:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:43.664 10:09:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:43.664 10:09:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:43.664 10:09:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:43.924 nvme0n1 00:28:43.924 10:09:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:43.924 10:09:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:43.924 10:09:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:43.924 10:09:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:43.924 10:09:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:44.184 10:09:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:44.184 10:09:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:44.184 10:09:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:44.184 10:09:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:44.184 10:09:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:44.184 10:09:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:44.184 10:09:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:44.184 10:09:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:28:44.184 10:09:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:44.184 10:09:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:44.184 10:09:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:44.184 10:09:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:44.184 10:09:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:OGFiNzdiY2Q0NWEwOGNlMTRjNDliODA1NTM0MTE0NGE2NzIwMmUyZDBhNzhiYTg5You87w==: 00:28:44.184 10:09:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTc0YmNjMzk4Njk4MzU4YzhhZjRjNzIwZjhhYzA1NzdfNrbE: 00:28:44.184 10:09:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:44.184 10:09:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:44.184 10:09:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OGFiNzdiY2Q0NWEwOGNlMTRjNDliODA1NTM0MTE0NGE2NzIwMmUyZDBhNzhiYTg5You87w==: 00:28:44.184 10:09:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTc0YmNjMzk4Njk4MzU4YzhhZjRjNzIwZjhhYzA1NzdfNrbE: ]] 00:28:44.184 10:09:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTc0YmNjMzk4Njk4MzU4YzhhZjRjNzIwZjhhYzA1NzdfNrbE: 00:28:44.184 10:09:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:28:44.184 10:09:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:44.184 10:09:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:44.184 10:09:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:44.184 10:09:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:44.184 10:09:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:44.184 10:09:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:28:44.184 10:09:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:44.184 10:09:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:44.184 10:09:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:44.184 10:09:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:44.184 10:09:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:44.184 10:09:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:44.184 10:09:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:44.184 10:09:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:44.184 10:09:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:44.184 10:09:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:44.184 10:09:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:44.184 10:09:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:44.184 10:09:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:44.184 10:09:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:44.184 10:09:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:44.184 10:09:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:44.184 10:09:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:44.755 nvme0n1 00:28:44.755 10:09:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:44.755 10:09:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:28:44.755 10:09:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:44.755 10:09:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:44.755 10:09:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:44.755 10:09:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:44.755 10:09:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:44.755 10:09:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:44.755 10:09:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:44.755 10:09:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:44.755 10:09:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:44.755 10:09:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:44.755 10:09:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:28:44.755 10:09:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:44.755 10:09:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:44.755 10:09:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:44.755 10:09:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:44.755 10:09:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzdiNzAzMTBiMjcxZDJiNWUzNDVlMDk1ZGNkZjQ1ODIyOTc2YzFmYWJiODAxNTAzMDkwNzI2NDYzMWNjMGVmMSqQF48=: 00:28:44.755 10:09:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:44.755 10:09:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:44.755 10:09:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:44.755 10:09:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzdiNzAzMTBiMjcxZDJiNWUzNDVlMDk1ZGNkZjQ1ODIyOTc2YzFmYWJiODAxNTAzMDkwNzI2NDYzMWNjMGVmMSqQF48=: 00:28:44.755 10:09:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:44.755 10:09:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:28:44.755 10:09:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:44.755 10:09:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:44.755 10:09:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:44.755 10:09:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:44.755 10:09:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:44.755 10:09:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:28:44.755 10:09:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:44.755 10:09:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:44.755 10:09:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:44.755 10:09:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:44.755 10:09:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:44.755 10:09:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:44.755 10:09:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:44.755 10:09:58 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:44.755 10:09:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:44.755 10:09:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:44.755 10:09:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:44.755 10:09:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:44.755 10:09:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:44.755 10:09:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:44.755 10:09:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:44.755 10:09:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:44.755 10:09:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:45.325 nvme0n1 00:28:45.326 10:09:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:45.326 10:09:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:45.326 10:09:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:45.326 10:09:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:45.326 10:09:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:45.326 10:09:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:45.326 10:09:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:45.326 10:09:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:45.326 10:09:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:45.326 10:09:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:45.326 10:09:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:45.326 10:09:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:28:45.326 10:09:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:45.326 10:09:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:45.326 10:09:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:28:45.326 10:09:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:45.326 10:09:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:45.326 10:09:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:45.326 10:09:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:45.326 10:09:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDgyNDRhYzRmYjRiZTA3ODhhN2Y2ZTk1OTAzNDgyNTRKWIDk: 00:28:45.326 10:09:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NWRkMmYyNDNkMWE3ZmFkNmEwMDljYmQ4YTBkYjcwMGZlMjJiMzAzMWVjMWZmNWI3MTlkMWQ2ZDJlZDU3NjMwMgqpwkE=: 00:28:45.326 10:09:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:45.326 10:09:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:45.326 10:09:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:ZDgyNDRhYzRmYjRiZTA3ODhhN2Y2ZTk1OTAzNDgyNTRKWIDk: 00:28:45.326 10:09:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NWRkMmYyNDNkMWE3ZmFkNmEwMDljYmQ4YTBkYjcwMGZlMjJiMzAzMWVjMWZmNWI3MTlkMWQ2ZDJlZDU3NjMwMgqpwkE=: ]] 00:28:45.326 10:09:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NWRkMmYyNDNkMWE3ZmFkNmEwMDljYmQ4YTBkYjcwMGZlMjJiMzAzMWVjMWZmNWI3MTlkMWQ2ZDJlZDU3NjMwMgqpwkE=: 00:28:45.326 10:09:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:28:45.326 10:09:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:45.326 10:09:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:45.326 10:09:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:45.326 10:09:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:45.326 10:09:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:45.326 10:09:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:28:45.326 10:09:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:45.326 10:09:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:45.326 10:09:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:45.326 10:09:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:45.326 10:09:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:45.326 10:09:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:45.326 10:09:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:45.326 10:09:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:45.326 10:09:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:45.326 10:09:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:45.326 10:09:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:45.326 10:09:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:45.326 10:09:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:45.326 10:09:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:45.326 10:09:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:45.326 10:09:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:45.326 10:09:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:45.326 nvme0n1 00:28:45.326 10:09:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:45.326 10:09:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:45.326 10:09:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:45.326 10:09:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:45.326 10:09:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:45.326 10:09:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:45.326 10:09:58 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:45.326 10:09:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:45.326 10:09:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:45.326 10:09:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:45.326 10:09:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:45.326 10:09:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:45.326 10:09:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:28:45.326 10:09:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:45.326 10:09:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:45.326 10:09:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:45.326 10:09:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:45.326 10:09:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmUyNWJlMDczMzllZmVlNDEwNzc0N2RiMTE4MjllMDQ2NmNhNmM3ZGYwOGZkNTIzL2wIVA==: 00:28:45.326 10:09:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YmNlOWFmOWE4Mjk2Y2MyN2Y3ODk3ZmMwMWNhY2FlY2VmMzBkYTM3ZTA4MTZhNjM2yZBA8Q==: 00:28:45.326 10:09:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:45.326 10:09:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:45.326 10:09:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmUyNWJlMDczMzllZmVlNDEwNzc0N2RiMTE4MjllMDQ2NmNhNmM3ZGYwOGZkNTIzL2wIVA==: 00:28:45.326 10:09:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YmNlOWFmOWE4Mjk2Y2MyN2Y3ODk3ZmMwMWNhY2FlY2VmMzBkYTM3ZTA4MTZhNjM2yZBA8Q==: ]] 00:28:45.326 10:09:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YmNlOWFmOWE4Mjk2Y2MyN2Y3ODk3ZmMwMWNhY2FlY2VmMzBkYTM3ZTA4MTZhNjM2yZBA8Q==: 00:28:45.326 10:09:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:28:45.326 10:09:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:45.326 10:09:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:45.326 10:09:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:45.326 10:09:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:45.326 10:09:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:45.326 10:09:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:28:45.326 10:09:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:45.326 10:09:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:45.586 10:09:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:45.586 10:09:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:45.586 10:09:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:45.586 10:09:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:45.586 10:09:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:45.586 10:09:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:45.586 10:09:58 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:45.586 10:09:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:45.586 10:09:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:45.586 10:09:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:45.586 10:09:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:45.586 10:09:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:45.586 10:09:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:45.586 10:09:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:45.586 10:09:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:45.586 nvme0n1 00:28:45.586 10:09:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:45.586 10:09:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:45.586 10:09:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:45.586 10:09:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:45.586 10:09:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:45.586 10:09:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:45.586 10:09:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:45.586 10:09:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:45.586 10:09:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:45.586 10:09:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:45.586 10:09:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:45.586 10:09:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:45.586 10:09:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:28:45.586 10:09:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:45.586 10:09:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:45.586 10:09:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:45.586 10:09:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:45.586 10:09:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZmVhYmU3MTA5ZGQ4MTAyNDkwNDQ3ZDUzY2JhMmY0M2UwyzRB: 00:28:45.586 10:09:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZmM4ODI3ZjI0MDNlNDNjNWYyZDczNjY0YTM5ODJiZTBV5Kvy: 00:28:45.586 10:09:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:45.586 10:09:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:45.586 10:09:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZmVhYmU3MTA5ZGQ4MTAyNDkwNDQ3ZDUzY2JhMmY0M2UwyzRB: 00:28:45.586 10:09:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZmM4ODI3ZjI0MDNlNDNjNWYyZDczNjY0YTM5ODJiZTBV5Kvy: ]] 00:28:45.586 10:09:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZmM4ODI3ZjI0MDNlNDNjNWYyZDczNjY0YTM5ODJiZTBV5Kvy: 00:28:45.586 10:09:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # 
connect_authenticate sha512 ffdhe2048 2 00:28:45.586 10:09:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:45.587 10:09:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:45.587 10:09:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:45.587 10:09:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:45.587 10:09:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:45.587 10:09:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:28:45.587 10:09:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:45.587 10:09:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:45.587 10:09:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:45.587 10:09:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:45.587 10:09:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:45.587 10:09:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:45.587 10:09:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:45.587 10:09:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:45.587 10:09:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:45.587 10:09:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:45.587 10:09:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:45.587 10:09:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:45.587 10:09:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:45.587 10:09:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:45.587 10:09:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:45.587 10:09:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:45.587 10:09:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:45.847 nvme0n1 00:28:45.847 10:09:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:45.847 10:09:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:45.847 10:09:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:45.847 10:09:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:45.847 10:09:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:45.847 10:09:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:45.847 10:09:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:45.847 10:09:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:45.847 10:09:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:45.847 10:09:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:45.847 10:09:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:45.847 10:09:59 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:45.847 10:09:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:28:45.847 10:09:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:45.847 10:09:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:45.847 10:09:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:45.847 10:09:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:45.847 10:09:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OGFiNzdiY2Q0NWEwOGNlMTRjNDliODA1NTM0MTE0NGE2NzIwMmUyZDBhNzhiYTg5You87w==: 00:28:45.847 10:09:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTc0YmNjMzk4Njk4MzU4YzhhZjRjNzIwZjhhYzA1NzdfNrbE: 00:28:45.847 10:09:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:45.847 10:09:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:45.847 10:09:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OGFiNzdiY2Q0NWEwOGNlMTRjNDliODA1NTM0MTE0NGE2NzIwMmUyZDBhNzhiYTg5You87w==: 00:28:45.847 10:09:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTc0YmNjMzk4Njk4MzU4YzhhZjRjNzIwZjhhYzA1NzdfNrbE: ]] 00:28:45.847 10:09:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTc0YmNjMzk4Njk4MzU4YzhhZjRjNzIwZjhhYzA1NzdfNrbE: 00:28:45.847 10:09:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:28:45.847 10:09:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:45.847 10:09:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:45.847 10:09:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:45.847 10:09:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:45.847 10:09:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:45.847 10:09:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:28:45.847 10:09:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:45.847 10:09:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:45.847 10:09:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:45.847 10:09:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:45.847 10:09:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:45.847 10:09:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:45.847 10:09:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:45.847 10:09:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:45.847 10:09:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:45.847 10:09:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:45.847 10:09:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:45.847 10:09:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:45.847 10:09:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:45.847 10:09:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:45.847 10:09:59 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:45.847 10:09:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:45.847 10:09:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:45.847 nvme0n1 00:28:45.847 10:09:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:45.847 10:09:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:45.847 10:09:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:45.847 10:09:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:45.847 10:09:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:45.847 10:09:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:45.847 10:09:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:45.847 10:09:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:45.847 10:09:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:45.847 10:09:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:45.847 10:09:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:45.847 10:09:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:45.847 10:09:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:28:45.847 10:09:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:45.847 10:09:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:45.847 10:09:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:45.847 10:09:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:45.847 10:09:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzdiNzAzMTBiMjcxZDJiNWUzNDVlMDk1ZGNkZjQ1ODIyOTc2YzFmYWJiODAxNTAzMDkwNzI2NDYzMWNjMGVmMSqQF48=: 00:28:45.847 10:09:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:45.848 10:09:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:45.848 10:09:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:45.848 10:09:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzdiNzAzMTBiMjcxZDJiNWUzNDVlMDk1ZGNkZjQ1ODIyOTc2YzFmYWJiODAxNTAzMDkwNzI2NDYzMWNjMGVmMSqQF48=: 00:28:45.848 10:09:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:45.848 10:09:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:28:45.848 10:09:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:45.848 10:09:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:45.848 10:09:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:45.848 10:09:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:45.848 10:09:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:45.848 10:09:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:28:45.848 10:09:59 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:28:45.848 10:09:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:46.108 10:09:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:46.108 10:09:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:46.108 10:09:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:46.108 10:09:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:46.108 10:09:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:46.108 10:09:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:46.108 10:09:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:46.108 10:09:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:46.108 10:09:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:46.108 10:09:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:46.108 10:09:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:46.108 10:09:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:46.108 10:09:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:46.108 10:09:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:46.108 10:09:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:46.108 nvme0n1 00:28:46.108 10:09:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:46.108 10:09:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:46.108 10:09:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:46.108 10:09:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:46.108 10:09:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:46.108 10:09:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:46.108 10:09:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:46.108 10:09:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:46.108 10:09:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:46.108 10:09:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:46.108 10:09:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:46.108 10:09:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:46.108 10:09:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:46.108 10:09:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:28:46.108 10:09:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:46.108 10:09:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:46.108 10:09:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:46.108 10:09:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:46.108 10:09:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:ZDgyNDRhYzRmYjRiZTA3ODhhN2Y2ZTk1OTAzNDgyNTRKWIDk: 00:28:46.108 10:09:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NWRkMmYyNDNkMWE3ZmFkNmEwMDljYmQ4YTBkYjcwMGZlMjJiMzAzMWVjMWZmNWI3MTlkMWQ2ZDJlZDU3NjMwMgqpwkE=: 00:28:46.108 10:09:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:46.108 10:09:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:46.108 10:09:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDgyNDRhYzRmYjRiZTA3ODhhN2Y2ZTk1OTAzNDgyNTRKWIDk: 00:28:46.108 10:09:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NWRkMmYyNDNkMWE3ZmFkNmEwMDljYmQ4YTBkYjcwMGZlMjJiMzAzMWVjMWZmNWI3MTlkMWQ2ZDJlZDU3NjMwMgqpwkE=: ]] 00:28:46.108 10:09:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NWRkMmYyNDNkMWE3ZmFkNmEwMDljYmQ4YTBkYjcwMGZlMjJiMzAzMWVjMWZmNWI3MTlkMWQ2ZDJlZDU3NjMwMgqpwkE=: 00:28:46.108 10:09:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:28:46.108 10:09:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:46.108 10:09:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:46.108 10:09:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:46.108 10:09:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:46.108 10:09:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:46.108 10:09:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:28:46.108 10:09:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:46.108 10:09:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:46.108 10:09:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:46.108 10:09:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:46.108 10:09:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:46.108 10:09:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:46.108 10:09:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:46.108 10:09:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:46.108 10:09:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:46.108 10:09:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:46.108 10:09:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:46.108 10:09:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:46.108 10:09:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:46.108 10:09:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:46.108 10:09:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:46.108 10:09:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:46.108 10:09:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:46.369 nvme0n1 00:28:46.369 10:09:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:46.369 
10:09:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:46.369 10:09:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:46.369 10:09:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:46.369 10:09:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:46.369 10:09:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:46.369 10:09:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:46.369 10:09:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:46.369 10:09:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:46.369 10:09:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:46.369 10:09:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:46.369 10:09:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:46.369 10:09:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:28:46.369 10:09:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:46.369 10:09:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:46.369 10:09:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:46.369 10:09:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:46.369 10:09:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmUyNWJlMDczMzllZmVlNDEwNzc0N2RiMTE4MjllMDQ2NmNhNmM3ZGYwOGZkNTIzL2wIVA==: 00:28:46.369 10:09:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YmNlOWFmOWE4Mjk2Y2MyN2Y3ODk3ZmMwMWNhY2FlY2VmMzBkYTM3ZTA4MTZhNjM2yZBA8Q==: 00:28:46.369 10:09:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:46.369 10:09:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:46.369 10:09:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmUyNWJlMDczMzllZmVlNDEwNzc0N2RiMTE4MjllMDQ2NmNhNmM3ZGYwOGZkNTIzL2wIVA==: 00:28:46.369 10:09:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YmNlOWFmOWE4Mjk2Y2MyN2Y3ODk3ZmMwMWNhY2FlY2VmMzBkYTM3ZTA4MTZhNjM2yZBA8Q==: ]] 00:28:46.369 10:09:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YmNlOWFmOWE4Mjk2Y2MyN2Y3ODk3ZmMwMWNhY2FlY2VmMzBkYTM3ZTA4MTZhNjM2yZBA8Q==: 00:28:46.369 10:09:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:28:46.369 10:09:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:46.369 10:09:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:46.369 10:09:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:46.369 10:09:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:46.369 10:09:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:46.369 10:09:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:28:46.369 10:09:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:46.369 10:09:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:46.369 10:09:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:46.369 10:09:59 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:46.369 10:09:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:46.369 10:09:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:46.369 10:09:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:46.369 10:09:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:46.369 10:09:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:46.369 10:09:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:46.369 10:09:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:46.369 10:09:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:46.369 10:09:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:46.369 10:09:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:46.369 10:09:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:46.369 10:09:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:46.369 10:09:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:46.369 nvme0n1 00:28:46.369 10:09:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:46.369 10:09:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:46.369 10:09:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:46.369 10:09:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:46.369 10:09:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:46.369 10:09:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:46.629 10:09:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:46.629 10:09:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:46.629 10:09:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:46.630 10:09:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:46.630 10:09:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:46.630 10:09:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:46.630 10:09:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:28:46.630 10:09:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:46.630 10:09:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:46.630 10:09:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:46.630 10:09:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:46.630 10:10:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZmVhYmU3MTA5ZGQ4MTAyNDkwNDQ3ZDUzY2JhMmY0M2UwyzRB: 00:28:46.630 10:10:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZmM4ODI3ZjI0MDNlNDNjNWYyZDczNjY0YTM5ODJiZTBV5Kvy: 00:28:46.630 10:10:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:46.630 10:10:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
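The nvmet_auth_set_key calls threaded through this trace are the target-side half of each round: the echoed digest string ('hmac(sha512)'), the DH group, and the two DHHC-1 secrets are pushed into the kernel nvmet target's per-host DH-HMAC-CHAP attributes so the attach that follows can authenticate. The trace shows only the echoed values, not their destinations, so the lines below are a hedged sketch that assumes the standard nvmet configfs layout rather than the exact body of host/auth.sh; the paths are the assumption, the values are copied from the keyid=2 round above.

#!/usr/bin/env bash
# Approximate target-side provisioning for one round (sha512 / ffdhe3072 / keyid 2).
# ASSUMPTION: the echoed values land in the standard Linux nvmet configfs host
# attributes; the host entry for this NQN is assumed to exist from target setup.
hostnqn=nqn.2024-02.io.spdk:host0
host_cfg=/sys/kernel/config/nvmet/hosts/$hostnqn

echo 'hmac(sha512)' > "$host_cfg/dhchap_hash"      # digest under test
echo ffdhe3072      > "$host_cfg/dhchap_dhgroup"   # DH group under test
echo 'DHHC-1:01:ZmVhYmU3MTA5ZGQ4MTAyNDkwNDQ3ZDUzY2JhMmY0M2UwyzRB:' > "$host_cfg/dhchap_key"
echo 'DHHC-1:01:ZmM4ODI3ZjI0MDNlNDNjNWYyZDczNjY0YTM5ODJiZTBV5Kvy:' > "$host_cfg/dhchap_ctrl_key"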
00:28:46.630 10:10:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZmVhYmU3MTA5ZGQ4MTAyNDkwNDQ3ZDUzY2JhMmY0M2UwyzRB: 00:28:46.630 10:10:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZmM4ODI3ZjI0MDNlNDNjNWYyZDczNjY0YTM5ODJiZTBV5Kvy: ]] 00:28:46.630 10:10:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZmM4ODI3ZjI0MDNlNDNjNWYyZDczNjY0YTM5ODJiZTBV5Kvy: 00:28:46.630 10:10:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:28:46.630 10:10:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:46.630 10:10:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:46.630 10:10:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:46.630 10:10:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:46.630 10:10:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:46.630 10:10:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:28:46.630 10:10:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:46.630 10:10:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:46.630 10:10:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:46.630 10:10:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:46.630 10:10:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:46.630 10:10:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:46.630 10:10:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:46.630 10:10:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:46.630 10:10:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:46.630 10:10:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:46.630 10:10:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:46.630 10:10:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:46.630 10:10:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:46.630 10:10:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:46.630 10:10:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:46.630 10:10:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:46.630 10:10:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:46.630 nvme0n1 00:28:46.630 10:10:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:46.630 10:10:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:46.630 10:10:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:46.630 10:10:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:46.630 10:10:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:46.630 10:10:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:46.630 10:10:00 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:46.630 10:10:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:46.630 10:10:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:46.630 10:10:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:46.630 10:10:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:46.630 10:10:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:46.630 10:10:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:28:46.630 10:10:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:46.630 10:10:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:46.630 10:10:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:46.630 10:10:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:46.630 10:10:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OGFiNzdiY2Q0NWEwOGNlMTRjNDliODA1NTM0MTE0NGE2NzIwMmUyZDBhNzhiYTg5You87w==: 00:28:46.630 10:10:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTc0YmNjMzk4Njk4MzU4YzhhZjRjNzIwZjhhYzA1NzdfNrbE: 00:28:46.630 10:10:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:46.630 10:10:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:46.630 10:10:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OGFiNzdiY2Q0NWEwOGNlMTRjNDliODA1NTM0MTE0NGE2NzIwMmUyZDBhNzhiYTg5You87w==: 00:28:46.630 10:10:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTc0YmNjMzk4Njk4MzU4YzhhZjRjNzIwZjhhYzA1NzdfNrbE: ]] 00:28:46.630 10:10:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTc0YmNjMzk4Njk4MzU4YzhhZjRjNzIwZjhhYzA1NzdfNrbE: 00:28:46.630 10:10:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:28:46.630 10:10:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:46.630 10:10:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:46.630 10:10:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:46.630 10:10:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:46.630 10:10:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:46.630 10:10:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:28:46.630 10:10:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:46.630 10:10:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:46.630 10:10:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:46.630 10:10:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:46.630 10:10:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:46.630 10:10:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:46.630 10:10:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:46.630 10:10:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:46.630 10:10:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 
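The get_main_ns_ip block that repeats before every attach is plain transport-based indirection: an associative array maps the transport to the name of an environment variable (NVMF_FIRST_TARGET_IP for rdma, NVMF_INITIATOR_IP for tcp), and the function then dereferences that name, which is why every tcp iteration in this log resolves to 10.0.0.1. A standalone sketch of that lookup follows; the variable values are assumed from this run and the function body is reconstructed from the trace, not copied from nvmf/common.sh.

#!/usr/bin/env bash
# Reconstructed sketch of the get_main_ns_ip lookup seen in the trace.
# Values are assumptions based on this run (tcp transport, initiator at 10.0.0.1).
TEST_TRANSPORT=tcp
NVMF_INITIATOR_IP=10.0.0.1
NVMF_FIRST_TARGET_IP=10.0.0.2   # placeholder; not shown in this excerpt

get_main_ns_ip() {
    local ip
    declare -A ip_candidates
    ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
    ip_candidates["tcp"]=NVMF_INITIATOR_IP
    [[ -z $TEST_TRANSPORT ]] && return 1    # transport must be set
    ip=${ip_candidates[$TEST_TRANSPORT]}    # pick the variable *name* for this transport
    [[ -z ${!ip} ]] && return 1             # ...which must resolve to an address
    echo "${!ip}"
}

get_main_ns_ip   # prints 10.0.0.1 for this configuration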
00:28:46.630 10:10:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:46.630 10:10:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:46.630 10:10:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:46.630 10:10:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:46.630 10:10:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:46.890 10:10:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:46.890 10:10:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:46.890 10:10:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:46.890 nvme0n1 00:28:46.890 10:10:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:46.890 10:10:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:46.890 10:10:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:46.890 10:10:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:46.890 10:10:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:46.890 10:10:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:46.890 10:10:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:46.890 10:10:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:46.890 10:10:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:46.890 10:10:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:46.890 10:10:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:46.890 10:10:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:46.890 10:10:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:28:46.890 10:10:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:46.890 10:10:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:46.890 10:10:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:46.890 10:10:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:46.890 10:10:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzdiNzAzMTBiMjcxZDJiNWUzNDVlMDk1ZGNkZjQ1ODIyOTc2YzFmYWJiODAxNTAzMDkwNzI2NDYzMWNjMGVmMSqQF48=: 00:28:46.890 10:10:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:46.890 10:10:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:46.890 10:10:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:46.890 10:10:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzdiNzAzMTBiMjcxZDJiNWUzNDVlMDk1ZGNkZjQ1ODIyOTc2YzFmYWJiODAxNTAzMDkwNzI2NDYzMWNjMGVmMSqQF48=: 00:28:46.890 10:10:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:46.890 10:10:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:28:46.890 10:10:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:46.890 10:10:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:46.890 
10:10:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:46.890 10:10:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:46.890 10:10:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:46.890 10:10:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:28:46.890 10:10:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:46.890 10:10:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:46.890 10:10:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:46.890 10:10:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:46.890 10:10:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:46.890 10:10:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:46.890 10:10:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:46.890 10:10:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:46.890 10:10:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:46.890 10:10:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:46.890 10:10:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:46.891 10:10:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:46.891 10:10:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:46.891 10:10:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:46.891 10:10:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:46.891 10:10:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:46.891 10:10:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:47.150 nvme0n1 00:28:47.150 10:10:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:47.150 10:10:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:47.150 10:10:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:47.150 10:10:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:47.150 10:10:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:47.150 10:10:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:47.150 10:10:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:47.150 10:10:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:47.150 10:10:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:47.150 10:10:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:47.150 10:10:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:47.150 10:10:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:47.150 10:10:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:47.150 10:10:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key 
sha512 ffdhe4096 0 00:28:47.150 10:10:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:47.150 10:10:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:47.150 10:10:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:47.150 10:10:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:47.150 10:10:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDgyNDRhYzRmYjRiZTA3ODhhN2Y2ZTk1OTAzNDgyNTRKWIDk: 00:28:47.150 10:10:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NWRkMmYyNDNkMWE3ZmFkNmEwMDljYmQ4YTBkYjcwMGZlMjJiMzAzMWVjMWZmNWI3MTlkMWQ2ZDJlZDU3NjMwMgqpwkE=: 00:28:47.150 10:10:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:47.150 10:10:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:47.150 10:10:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDgyNDRhYzRmYjRiZTA3ODhhN2Y2ZTk1OTAzNDgyNTRKWIDk: 00:28:47.150 10:10:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NWRkMmYyNDNkMWE3ZmFkNmEwMDljYmQ4YTBkYjcwMGZlMjJiMzAzMWVjMWZmNWI3MTlkMWQ2ZDJlZDU3NjMwMgqpwkE=: ]] 00:28:47.150 10:10:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NWRkMmYyNDNkMWE3ZmFkNmEwMDljYmQ4YTBkYjcwMGZlMjJiMzAzMWVjMWZmNWI3MTlkMWQ2ZDJlZDU3NjMwMgqpwkE=: 00:28:47.150 10:10:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:28:47.150 10:10:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:47.150 10:10:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:47.150 10:10:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:47.150 10:10:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:47.150 10:10:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:47.150 10:10:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:28:47.150 10:10:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:47.150 10:10:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:47.150 10:10:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:47.150 10:10:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:47.150 10:10:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:47.150 10:10:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:47.150 10:10:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:47.150 10:10:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:47.150 10:10:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:47.150 10:10:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:47.150 10:10:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:47.150 10:10:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:47.150 10:10:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:47.150 10:10:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:47.150 10:10:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f 
ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:47.150 10:10:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:47.150 10:10:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:47.410 nvme0n1 00:28:47.410 10:10:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:47.410 10:10:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:47.410 10:10:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:47.410 10:10:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:47.410 10:10:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:47.410 10:10:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:47.410 10:10:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:47.410 10:10:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:47.410 10:10:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:47.410 10:10:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:47.410 10:10:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:47.410 10:10:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:47.410 10:10:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:28:47.410 10:10:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:47.410 10:10:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:47.410 10:10:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:47.410 10:10:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:47.410 10:10:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmUyNWJlMDczMzllZmVlNDEwNzc0N2RiMTE4MjllMDQ2NmNhNmM3ZGYwOGZkNTIzL2wIVA==: 00:28:47.410 10:10:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YmNlOWFmOWE4Mjk2Y2MyN2Y3ODk3ZmMwMWNhY2FlY2VmMzBkYTM3ZTA4MTZhNjM2yZBA8Q==: 00:28:47.410 10:10:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:47.410 10:10:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:47.410 10:10:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmUyNWJlMDczMzllZmVlNDEwNzc0N2RiMTE4MjllMDQ2NmNhNmM3ZGYwOGZkNTIzL2wIVA==: 00:28:47.410 10:10:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YmNlOWFmOWE4Mjk2Y2MyN2Y3ODk3ZmMwMWNhY2FlY2VmMzBkYTM3ZTA4MTZhNjM2yZBA8Q==: ]] 00:28:47.410 10:10:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YmNlOWFmOWE4Mjk2Y2MyN2Y3ODk3ZmMwMWNhY2FlY2VmMzBkYTM3ZTA4MTZhNjM2yZBA8Q==: 00:28:47.410 10:10:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:28:47.410 10:10:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:47.410 10:10:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:47.410 10:10:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:47.410 10:10:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:47.410 10:10:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:47.410 10:10:00 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:28:47.410 10:10:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:47.410 10:10:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:47.410 10:10:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:47.410 10:10:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:47.410 10:10:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:47.410 10:10:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:47.410 10:10:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:47.410 10:10:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:47.410 10:10:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:47.410 10:10:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:47.410 10:10:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:47.410 10:10:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:47.410 10:10:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:47.410 10:10:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:47.410 10:10:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:47.410 10:10:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:47.410 10:10:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:47.670 nvme0n1 00:28:47.670 10:10:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:47.670 10:10:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:47.670 10:10:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:47.670 10:10:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:47.670 10:10:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:47.670 10:10:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:47.670 10:10:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:47.670 10:10:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:47.670 10:10:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:47.670 10:10:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:47.670 10:10:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:47.670 10:10:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:47.670 10:10:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:28:47.670 10:10:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:47.670 10:10:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:47.670 10:10:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:47.670 10:10:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 
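One detail worth noting in these expansions: the ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) line is what flips each round between bidirectional and unidirectional authentication. When ckeys[keyid] is empty (keyid 4 in this run), the :+ expansion produces nothing and bdev_nvme_attach_controller is issued without --dhchap-ctrlr-key; otherwise the controller key name is appended. A self-contained illustration with placeholder array contents:

#!/usr/bin/env bash
# Demonstrates the ${ckeys[keyid]:+...} expansion used throughout the trace.
# The ckeys contents are placeholders; host/auth.sh fills them with real secrets.
declare -a ckeys=([2]="some-ctrlr-secret" [4]="")

for keyid in 2 4; do
    # Expands to two extra arguments only when a controller key exists for this keyid.
    ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
    echo "keyid=$keyid -> ${ckey[*]:-<no ctrlr key, unidirectional auth>}"
done
# keyid=2 -> --dhchap-ctrlr-key ckey2
# keyid=4 -> <no ctrlr key, unidirectional auth>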
00:28:47.670 10:10:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZmVhYmU3MTA5ZGQ4MTAyNDkwNDQ3ZDUzY2JhMmY0M2UwyzRB: 00:28:47.670 10:10:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZmM4ODI3ZjI0MDNlNDNjNWYyZDczNjY0YTM5ODJiZTBV5Kvy: 00:28:47.670 10:10:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:47.670 10:10:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:47.670 10:10:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZmVhYmU3MTA5ZGQ4MTAyNDkwNDQ3ZDUzY2JhMmY0M2UwyzRB: 00:28:47.670 10:10:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZmM4ODI3ZjI0MDNlNDNjNWYyZDczNjY0YTM5ODJiZTBV5Kvy: ]] 00:28:47.670 10:10:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZmM4ODI3ZjI0MDNlNDNjNWYyZDczNjY0YTM5ODJiZTBV5Kvy: 00:28:47.670 10:10:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:28:47.670 10:10:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:47.670 10:10:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:47.670 10:10:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:47.670 10:10:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:47.670 10:10:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:47.670 10:10:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:28:47.670 10:10:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:47.670 10:10:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:47.671 10:10:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:47.671 10:10:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:47.671 10:10:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:47.671 10:10:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:47.671 10:10:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:47.671 10:10:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:47.671 10:10:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:47.671 10:10:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:47.671 10:10:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:47.671 10:10:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:47.671 10:10:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:47.671 10:10:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:47.671 10:10:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:47.671 10:10:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:47.671 10:10:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:47.930 nvme0n1 00:28:47.930 10:10:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:47.930 10:10:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # 
rpc_cmd bdev_nvme_get_controllers 00:28:47.930 10:10:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:47.930 10:10:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:47.930 10:10:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:47.930 10:10:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:47.930 10:10:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:47.930 10:10:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:47.930 10:10:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:47.930 10:10:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:47.930 10:10:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:47.930 10:10:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:47.930 10:10:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:28:47.930 10:10:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:47.930 10:10:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:47.930 10:10:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:47.930 10:10:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:47.930 10:10:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OGFiNzdiY2Q0NWEwOGNlMTRjNDliODA1NTM0MTE0NGE2NzIwMmUyZDBhNzhiYTg5You87w==: 00:28:47.930 10:10:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTc0YmNjMzk4Njk4MzU4YzhhZjRjNzIwZjhhYzA1NzdfNrbE: 00:28:47.930 10:10:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:47.930 10:10:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:47.930 10:10:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OGFiNzdiY2Q0NWEwOGNlMTRjNDliODA1NTM0MTE0NGE2NzIwMmUyZDBhNzhiYTg5You87w==: 00:28:47.930 10:10:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTc0YmNjMzk4Njk4MzU4YzhhZjRjNzIwZjhhYzA1NzdfNrbE: ]] 00:28:47.930 10:10:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTc0YmNjMzk4Njk4MzU4YzhhZjRjNzIwZjhhYzA1NzdfNrbE: 00:28:47.930 10:10:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:28:47.930 10:10:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:47.930 10:10:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:47.930 10:10:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:47.930 10:10:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:47.930 10:10:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:47.930 10:10:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:28:47.930 10:10:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:47.930 10:10:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:47.930 10:10:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:47.930 10:10:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:47.930 10:10:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:28:47.930 10:10:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:47.930 10:10:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:47.930 10:10:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:47.930 10:10:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:47.930 10:10:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:47.930 10:10:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:47.930 10:10:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:47.930 10:10:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:47.930 10:10:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:47.930 10:10:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:47.930 10:10:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:47.930 10:10:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:48.190 nvme0n1 00:28:48.190 10:10:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:48.190 10:10:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:48.190 10:10:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:48.190 10:10:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:48.190 10:10:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:48.190 10:10:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:48.190 10:10:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:48.190 10:10:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:48.190 10:10:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:48.190 10:10:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:48.190 10:10:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:48.190 10:10:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:48.190 10:10:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:28:48.190 10:10:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:48.190 10:10:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:48.190 10:10:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:48.190 10:10:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:48.190 10:10:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzdiNzAzMTBiMjcxZDJiNWUzNDVlMDk1ZGNkZjQ1ODIyOTc2YzFmYWJiODAxNTAzMDkwNzI2NDYzMWNjMGVmMSqQF48=: 00:28:48.190 10:10:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:48.190 10:10:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:48.190 10:10:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:48.190 10:10:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:NzdiNzAzMTBiMjcxZDJiNWUzNDVlMDk1ZGNkZjQ1ODIyOTc2YzFmYWJiODAxNTAzMDkwNzI2NDYzMWNjMGVmMSqQF48=: 00:28:48.190 10:10:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:48.190 10:10:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:28:48.190 10:10:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:48.190 10:10:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:48.190 10:10:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:48.190 10:10:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:48.190 10:10:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:48.190 10:10:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:28:48.190 10:10:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:48.190 10:10:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:48.190 10:10:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:48.190 10:10:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:48.190 10:10:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:48.190 10:10:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:48.190 10:10:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:48.190 10:10:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:48.190 10:10:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:48.190 10:10:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:48.190 10:10:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:48.190 10:10:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:48.190 10:10:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:48.190 10:10:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:48.190 10:10:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:48.190 10:10:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:48.190 10:10:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:48.450 nvme0n1 00:28:48.450 10:10:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:48.450 10:10:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:48.450 10:10:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:48.450 10:10:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:48.450 10:10:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:48.450 10:10:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:48.450 10:10:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:48.450 10:10:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:48.450 10:10:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:28:48.450 10:10:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:48.450 10:10:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:48.450 10:10:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:48.450 10:10:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:48.450 10:10:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:28:48.450 10:10:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:48.450 10:10:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:48.450 10:10:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:48.450 10:10:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:48.450 10:10:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDgyNDRhYzRmYjRiZTA3ODhhN2Y2ZTk1OTAzNDgyNTRKWIDk: 00:28:48.450 10:10:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NWRkMmYyNDNkMWE3ZmFkNmEwMDljYmQ4YTBkYjcwMGZlMjJiMzAzMWVjMWZmNWI3MTlkMWQ2ZDJlZDU3NjMwMgqpwkE=: 00:28:48.450 10:10:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:48.450 10:10:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:48.450 10:10:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDgyNDRhYzRmYjRiZTA3ODhhN2Y2ZTk1OTAzNDgyNTRKWIDk: 00:28:48.450 10:10:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NWRkMmYyNDNkMWE3ZmFkNmEwMDljYmQ4YTBkYjcwMGZlMjJiMzAzMWVjMWZmNWI3MTlkMWQ2ZDJlZDU3NjMwMgqpwkE=: ]] 00:28:48.450 10:10:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NWRkMmYyNDNkMWE3ZmFkNmEwMDljYmQ4YTBkYjcwMGZlMjJiMzAzMWVjMWZmNWI3MTlkMWQ2ZDJlZDU3NjMwMgqpwkE=: 00:28:48.450 10:10:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:28:48.450 10:10:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:48.450 10:10:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:48.450 10:10:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:48.450 10:10:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:48.450 10:10:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:48.450 10:10:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:28:48.450 10:10:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:48.450 10:10:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:48.450 10:10:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:48.450 10:10:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:48.450 10:10:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:48.450 10:10:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:48.450 10:10:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:48.450 10:10:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:48.450 10:10:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:48.450 10:10:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
00:28:48.450 10:10:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:48.450 10:10:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:48.450 10:10:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:48.450 10:10:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:48.450 10:10:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:48.450 10:10:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:48.450 10:10:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:48.710 nvme0n1 00:28:48.710 10:10:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:48.710 10:10:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:48.710 10:10:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:48.710 10:10:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:48.710 10:10:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:48.710 10:10:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:48.710 10:10:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:48.710 10:10:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:48.710 10:10:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:48.710 10:10:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:48.710 10:10:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:48.710 10:10:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:48.710 10:10:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:28:48.710 10:10:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:48.710 10:10:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:48.710 10:10:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:48.710 10:10:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:48.710 10:10:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmUyNWJlMDczMzllZmVlNDEwNzc0N2RiMTE4MjllMDQ2NmNhNmM3ZGYwOGZkNTIzL2wIVA==: 00:28:48.711 10:10:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YmNlOWFmOWE4Mjk2Y2MyN2Y3ODk3ZmMwMWNhY2FlY2VmMzBkYTM3ZTA4MTZhNjM2yZBA8Q==: 00:28:48.711 10:10:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:48.711 10:10:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:48.711 10:10:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmUyNWJlMDczMzllZmVlNDEwNzc0N2RiMTE4MjllMDQ2NmNhNmM3ZGYwOGZkNTIzL2wIVA==: 00:28:48.711 10:10:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YmNlOWFmOWE4Mjk2Y2MyN2Y3ODk3ZmMwMWNhY2FlY2VmMzBkYTM3ZTA4MTZhNjM2yZBA8Q==: ]] 00:28:48.711 10:10:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YmNlOWFmOWE4Mjk2Y2MyN2Y3ODk3ZmMwMWNhY2FlY2VmMzBkYTM3ZTA4MTZhNjM2yZBA8Q==: 00:28:48.711 10:10:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 
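On the target side, nvmet_auth_set_key (host/auth.sh@42-51) tells the kernel nvmet target which digest, DH group and DH-HMAC-CHAP secrets to expect from this host: the trace shows it echoing 'hmac(sha512)', the DH group name, the host key and, when one is defined, the controller key. The redirection targets are not visible in the trace; the sketch below assumes they are the DH-CHAP attributes of the host entry under nvmet's configfs tree:

  # Hypothetical expansion of nvmet_auth_set_key for the keyid-1 pass traced above.
  # The attribute names (dhchap_hash, dhchap_dhgroup, dhchap_key, dhchap_ctrl_key) are an
  # assumption; only the echoed values come from the trace.
  hostnqn=nqn.2024-02.io.spdk:host0
  host_cfg=/sys/kernel/config/nvmet/hosts/$hostnqn

  echo 'hmac(sha512)' > "$host_cfg/dhchap_hash"       # digest under test
  echo 'ffdhe6144'    > "$host_cfg/dhchap_dhgroup"    # DH group under test
  echo "DHHC-1:00:..." > "$host_cfg/dhchap_key"       # host secret for keyid 1 (see trace)
  # Only written when ckeys[keyid] is non-empty, mirroring the [[ -z ... ]] guard at @51:
  echo "DHHC-1:02:..." > "$host_cfg/dhchap_ctrl_key"  # controller (bidirectional) secret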
00:28:48.711 10:10:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:48.711 10:10:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:48.711 10:10:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:48.711 10:10:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:48.711 10:10:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:48.711 10:10:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:28:48.711 10:10:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:48.711 10:10:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:48.711 10:10:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:48.711 10:10:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:48.711 10:10:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:48.711 10:10:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:48.711 10:10:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:48.711 10:10:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:48.711 10:10:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:48.711 10:10:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:48.711 10:10:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:48.711 10:10:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:48.711 10:10:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:48.711 10:10:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:48.711 10:10:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:48.711 10:10:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:48.711 10:10:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:48.971 nvme0n1 00:28:48.971 10:10:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:48.971 10:10:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:48.971 10:10:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:48.971 10:10:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:48.971 10:10:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:48.971 10:10:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:49.231 10:10:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:49.231 10:10:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:49.231 10:10:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:49.231 10:10:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:49.231 10:10:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:49.231 10:10:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:28:49.231 10:10:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:28:49.231 10:10:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:49.231 10:10:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:49.231 10:10:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:49.231 10:10:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:49.231 10:10:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZmVhYmU3MTA5ZGQ4MTAyNDkwNDQ3ZDUzY2JhMmY0M2UwyzRB: 00:28:49.231 10:10:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZmM4ODI3ZjI0MDNlNDNjNWYyZDczNjY0YTM5ODJiZTBV5Kvy: 00:28:49.231 10:10:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:49.231 10:10:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:49.231 10:10:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZmVhYmU3MTA5ZGQ4MTAyNDkwNDQ3ZDUzY2JhMmY0M2UwyzRB: 00:28:49.231 10:10:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZmM4ODI3ZjI0MDNlNDNjNWYyZDczNjY0YTM5ODJiZTBV5Kvy: ]] 00:28:49.231 10:10:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZmM4ODI3ZjI0MDNlNDNjNWYyZDczNjY0YTM5ODJiZTBV5Kvy: 00:28:49.231 10:10:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:28:49.231 10:10:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:49.231 10:10:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:49.231 10:10:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:49.231 10:10:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:49.231 10:10:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:49.231 10:10:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:28:49.231 10:10:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:49.231 10:10:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:49.231 10:10:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:49.231 10:10:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:49.231 10:10:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:49.231 10:10:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:49.231 10:10:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:49.231 10:10:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:49.231 10:10:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:49.231 10:10:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:49.231 10:10:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:49.231 10:10:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:49.231 10:10:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:49.231 10:10:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:49.231 10:10:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:49.231 10:10:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:49.231 10:10:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:49.491 nvme0n1 00:28:49.491 10:10:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:49.491 10:10:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:49.491 10:10:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:49.491 10:10:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:49.491 10:10:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:49.491 10:10:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:49.491 10:10:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:49.491 10:10:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:49.491 10:10:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:49.491 10:10:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:49.491 10:10:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:49.491 10:10:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:49.491 10:10:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:28:49.491 10:10:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:49.491 10:10:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:49.491 10:10:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:49.491 10:10:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:49.491 10:10:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OGFiNzdiY2Q0NWEwOGNlMTRjNDliODA1NTM0MTE0NGE2NzIwMmUyZDBhNzhiYTg5You87w==: 00:28:49.491 10:10:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTc0YmNjMzk4Njk4MzU4YzhhZjRjNzIwZjhhYzA1NzdfNrbE: 00:28:49.491 10:10:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:49.491 10:10:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:49.491 10:10:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OGFiNzdiY2Q0NWEwOGNlMTRjNDliODA1NTM0MTE0NGE2NzIwMmUyZDBhNzhiYTg5You87w==: 00:28:49.491 10:10:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTc0YmNjMzk4Njk4MzU4YzhhZjRjNzIwZjhhYzA1NzdfNrbE: ]] 00:28:49.491 10:10:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTc0YmNjMzk4Njk4MzU4YzhhZjRjNzIwZjhhYzA1NzdfNrbE: 00:28:49.491 10:10:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:28:49.491 10:10:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:49.491 10:10:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:49.491 10:10:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:49.491 10:10:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:49.491 10:10:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:49.491 10:10:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:28:49.491 10:10:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:49.491 10:10:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:49.491 10:10:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:49.491 10:10:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:49.491 10:10:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:49.491 10:10:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:49.491 10:10:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:49.491 10:10:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:49.491 10:10:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:49.491 10:10:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:49.491 10:10:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:49.491 10:10:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:49.492 10:10:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:49.492 10:10:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:49.492 10:10:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:49.492 10:10:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:49.492 10:10:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:49.754 nvme0n1 00:28:49.754 10:10:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:49.754 10:10:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:49.754 10:10:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:49.754 10:10:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:49.754 10:10:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:49.754 10:10:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:50.013 10:10:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:50.013 10:10:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:50.013 10:10:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:50.013 10:10:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:50.013 10:10:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:50.013 10:10:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:50.013 10:10:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:28:50.013 10:10:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:50.013 10:10:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:50.013 10:10:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:50.013 10:10:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:50.013 10:10:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:NzdiNzAzMTBiMjcxZDJiNWUzNDVlMDk1ZGNkZjQ1ODIyOTc2YzFmYWJiODAxNTAzMDkwNzI2NDYzMWNjMGVmMSqQF48=: 00:28:50.013 10:10:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:50.013 10:10:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:50.013 10:10:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:50.013 10:10:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzdiNzAzMTBiMjcxZDJiNWUzNDVlMDk1ZGNkZjQ1ODIyOTc2YzFmYWJiODAxNTAzMDkwNzI2NDYzMWNjMGVmMSqQF48=: 00:28:50.013 10:10:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:50.013 10:10:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:28:50.013 10:10:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:50.013 10:10:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:50.014 10:10:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:50.014 10:10:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:50.014 10:10:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:50.014 10:10:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:28:50.014 10:10:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:50.014 10:10:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:50.014 10:10:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:50.014 10:10:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:50.014 10:10:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:50.014 10:10:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:50.014 10:10:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:50.014 10:10:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:50.014 10:10:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:50.014 10:10:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:50.014 10:10:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:50.014 10:10:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:50.014 10:10:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:50.014 10:10:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:50.014 10:10:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:50.014 10:10:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:50.014 10:10:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:50.273 nvme0n1 00:28:50.273 10:10:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:50.273 10:10:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:50.273 10:10:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:50.273 10:10:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:50.273 10:10:03 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:50.273 10:10:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:50.273 10:10:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:50.273 10:10:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:50.273 10:10:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:50.273 10:10:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:50.273 10:10:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:50.273 10:10:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:50.273 10:10:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:50.273 10:10:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:28:50.273 10:10:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:50.273 10:10:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:50.273 10:10:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:50.273 10:10:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:50.273 10:10:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDgyNDRhYzRmYjRiZTA3ODhhN2Y2ZTk1OTAzNDgyNTRKWIDk: 00:28:50.273 10:10:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NWRkMmYyNDNkMWE3ZmFkNmEwMDljYmQ4YTBkYjcwMGZlMjJiMzAzMWVjMWZmNWI3MTlkMWQ2ZDJlZDU3NjMwMgqpwkE=: 00:28:50.273 10:10:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:50.273 10:10:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:50.273 10:10:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDgyNDRhYzRmYjRiZTA3ODhhN2Y2ZTk1OTAzNDgyNTRKWIDk: 00:28:50.273 10:10:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NWRkMmYyNDNkMWE3ZmFkNmEwMDljYmQ4YTBkYjcwMGZlMjJiMzAzMWVjMWZmNWI3MTlkMWQ2ZDJlZDU3NjMwMgqpwkE=: ]] 00:28:50.273 10:10:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NWRkMmYyNDNkMWE3ZmFkNmEwMDljYmQ4YTBkYjcwMGZlMjJiMzAzMWVjMWZmNWI3MTlkMWQ2ZDJlZDU3NjMwMgqpwkE=: 00:28:50.273 10:10:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:28:50.273 10:10:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:50.273 10:10:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:50.273 10:10:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:50.273 10:10:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:50.273 10:10:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:50.273 10:10:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:28:50.273 10:10:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:50.273 10:10:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:50.273 10:10:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:50.273 10:10:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:50.273 10:10:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:50.273 10:10:03 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:28:50.273 10:10:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:50.273 10:10:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:50.273 10:10:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:50.273 10:10:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:50.273 10:10:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:50.273 10:10:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:50.273 10:10:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:50.273 10:10:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:50.273 10:10:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:50.273 10:10:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:50.273 10:10:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:50.842 nvme0n1 00:28:50.842 10:10:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:50.842 10:10:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:50.842 10:10:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:50.842 10:10:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:50.842 10:10:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:50.842 10:10:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:50.842 10:10:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:50.842 10:10:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:50.842 10:10:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:50.842 10:10:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:50.842 10:10:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:50.842 10:10:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:50.842 10:10:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:28:50.842 10:10:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:50.842 10:10:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:50.842 10:10:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:50.842 10:10:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:50.842 10:10:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmUyNWJlMDczMzllZmVlNDEwNzc0N2RiMTE4MjllMDQ2NmNhNmM3ZGYwOGZkNTIzL2wIVA==: 00:28:50.842 10:10:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YmNlOWFmOWE4Mjk2Y2MyN2Y3ODk3ZmMwMWNhY2FlY2VmMzBkYTM3ZTA4MTZhNjM2yZBA8Q==: 00:28:50.842 10:10:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:50.842 10:10:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:50.842 10:10:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NmUyNWJlMDczMzllZmVlNDEwNzc0N2RiMTE4MjllMDQ2NmNhNmM3ZGYwOGZkNTIzL2wIVA==: 00:28:50.842 10:10:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YmNlOWFmOWE4Mjk2Y2MyN2Y3ODk3ZmMwMWNhY2FlY2VmMzBkYTM3ZTA4MTZhNjM2yZBA8Q==: ]] 00:28:50.842 10:10:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YmNlOWFmOWE4Mjk2Y2MyN2Y3ODk3ZmMwMWNhY2FlY2VmMzBkYTM3ZTA4MTZhNjM2yZBA8Q==: 00:28:50.842 10:10:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:28:50.842 10:10:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:50.842 10:10:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:50.842 10:10:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:50.842 10:10:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:50.842 10:10:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:50.842 10:10:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:28:50.842 10:10:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:50.842 10:10:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:50.842 10:10:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:50.842 10:10:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:50.842 10:10:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:50.842 10:10:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:50.842 10:10:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:50.842 10:10:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:50.842 10:10:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:50.842 10:10:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:50.842 10:10:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:50.842 10:10:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:50.842 10:10:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:50.842 10:10:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:50.842 10:10:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:50.842 10:10:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:50.842 10:10:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:51.412 nvme0n1 00:28:51.412 10:10:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:51.412 10:10:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:51.412 10:10:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:51.412 10:10:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:51.412 10:10:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:51.412 10:10:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:51.412 10:10:04 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:51.412 10:10:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:51.412 10:10:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:51.412 10:10:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:51.412 10:10:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:51.412 10:10:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:51.412 10:10:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:28:51.412 10:10:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:51.412 10:10:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:51.412 10:10:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:51.412 10:10:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:51.412 10:10:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZmVhYmU3MTA5ZGQ4MTAyNDkwNDQ3ZDUzY2JhMmY0M2UwyzRB: 00:28:51.412 10:10:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZmM4ODI3ZjI0MDNlNDNjNWYyZDczNjY0YTM5ODJiZTBV5Kvy: 00:28:51.412 10:10:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:51.412 10:10:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:51.412 10:10:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZmVhYmU3MTA5ZGQ4MTAyNDkwNDQ3ZDUzY2JhMmY0M2UwyzRB: 00:28:51.412 10:10:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZmM4ODI3ZjI0MDNlNDNjNWYyZDczNjY0YTM5ODJiZTBV5Kvy: ]] 00:28:51.412 10:10:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZmM4ODI3ZjI0MDNlNDNjNWYyZDczNjY0YTM5ODJiZTBV5Kvy: 00:28:51.412 10:10:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:28:51.412 10:10:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:51.412 10:10:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:51.412 10:10:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:51.412 10:10:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:51.412 10:10:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:51.412 10:10:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:28:51.412 10:10:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:51.412 10:10:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:51.412 10:10:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:51.412 10:10:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:51.412 10:10:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:51.412 10:10:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:51.412 10:10:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:51.412 10:10:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:51.412 10:10:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:51.412 10:10:04 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:51.412 10:10:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:51.412 10:10:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:51.412 10:10:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:51.412 10:10:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:51.412 10:10:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:51.412 10:10:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:51.412 10:10:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:52.000 nvme0n1 00:28:52.000 10:10:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:52.000 10:10:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:52.000 10:10:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:52.000 10:10:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:52.000 10:10:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:52.000 10:10:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:52.000 10:10:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:52.000 10:10:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:52.000 10:10:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:52.000 10:10:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:52.000 10:10:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:52.000 10:10:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:52.000 10:10:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:28:52.000 10:10:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:52.000 10:10:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:52.000 10:10:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:52.000 10:10:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:52.000 10:10:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OGFiNzdiY2Q0NWEwOGNlMTRjNDliODA1NTM0MTE0NGE2NzIwMmUyZDBhNzhiYTg5You87w==: 00:28:52.000 10:10:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTc0YmNjMzk4Njk4MzU4YzhhZjRjNzIwZjhhYzA1NzdfNrbE: 00:28:52.000 10:10:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:52.000 10:10:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:52.000 10:10:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OGFiNzdiY2Q0NWEwOGNlMTRjNDliODA1NTM0MTE0NGE2NzIwMmUyZDBhNzhiYTg5You87w==: 00:28:52.000 10:10:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTc0YmNjMzk4Njk4MzU4YzhhZjRjNzIwZjhhYzA1NzdfNrbE: ]] 00:28:52.000 10:10:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTc0YmNjMzk4Njk4MzU4YzhhZjRjNzIwZjhhYzA1NzdfNrbE: 00:28:52.000 10:10:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:28:52.000 10:10:05 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:52.000 10:10:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:52.000 10:10:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:52.000 10:10:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:52.000 10:10:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:52.000 10:10:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:28:52.000 10:10:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:52.000 10:10:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:52.000 10:10:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:52.000 10:10:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:52.000 10:10:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:52.001 10:10:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:52.001 10:10:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:52.001 10:10:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:52.001 10:10:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:52.001 10:10:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:52.001 10:10:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:52.001 10:10:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:52.001 10:10:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:52.001 10:10:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:52.001 10:10:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:52.001 10:10:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:52.001 10:10:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:52.607 nvme0n1 00:28:52.607 10:10:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:52.607 10:10:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:52.607 10:10:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:52.607 10:10:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:52.607 10:10:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:52.607 10:10:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:52.607 10:10:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:52.607 10:10:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:52.607 10:10:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:52.607 10:10:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:52.608 10:10:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:52.608 10:10:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:28:52.608 10:10:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:28:52.608 10:10:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:52.608 10:10:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:52.608 10:10:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:52.608 10:10:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:52.608 10:10:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzdiNzAzMTBiMjcxZDJiNWUzNDVlMDk1ZGNkZjQ1ODIyOTc2YzFmYWJiODAxNTAzMDkwNzI2NDYzMWNjMGVmMSqQF48=: 00:28:52.608 10:10:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:52.608 10:10:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:52.608 10:10:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:52.608 10:10:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzdiNzAzMTBiMjcxZDJiNWUzNDVlMDk1ZGNkZjQ1ODIyOTc2YzFmYWJiODAxNTAzMDkwNzI2NDYzMWNjMGVmMSqQF48=: 00:28:52.608 10:10:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:52.608 10:10:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:28:52.608 10:10:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:52.608 10:10:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:52.608 10:10:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:52.608 10:10:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:52.608 10:10:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:52.608 10:10:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:28:52.608 10:10:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:52.608 10:10:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:52.608 10:10:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:52.608 10:10:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:52.608 10:10:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:52.608 10:10:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:52.608 10:10:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:52.608 10:10:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:52.608 10:10:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:52.608 10:10:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:52.608 10:10:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:52.608 10:10:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:52.608 10:10:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:52.608 10:10:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:52.608 10:10:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:52.608 10:10:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:28:52.608 10:10:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:53.177 nvme0n1 00:28:53.177 10:10:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:53.177 10:10:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:53.177 10:10:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:53.177 10:10:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:53.177 10:10:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:53.177 10:10:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:53.177 10:10:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:53.177 10:10:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:53.177 10:10:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:53.177 10:10:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:53.177 10:10:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:53.177 10:10:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:28:53.177 10:10:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:53.177 10:10:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:53.177 10:10:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:53.177 10:10:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:53.177 10:10:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmUyNWJlMDczMzllZmVlNDEwNzc0N2RiMTE4MjllMDQ2NmNhNmM3ZGYwOGZkNTIzL2wIVA==: 00:28:53.177 10:10:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YmNlOWFmOWE4Mjk2Y2MyN2Y3ODk3ZmMwMWNhY2FlY2VmMzBkYTM3ZTA4MTZhNjM2yZBA8Q==: 00:28:53.177 10:10:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:53.177 10:10:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:53.177 10:10:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmUyNWJlMDczMzllZmVlNDEwNzc0N2RiMTE4MjllMDQ2NmNhNmM3ZGYwOGZkNTIzL2wIVA==: 00:28:53.177 10:10:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YmNlOWFmOWE4Mjk2Y2MyN2Y3ODk3ZmMwMWNhY2FlY2VmMzBkYTM3ZTA4MTZhNjM2yZBA8Q==: ]] 00:28:53.177 10:10:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YmNlOWFmOWE4Mjk2Y2MyN2Y3ODk3ZmMwMWNhY2FlY2VmMzBkYTM3ZTA4MTZhNjM2yZBA8Q==: 00:28:53.177 10:10:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:28:53.177 10:10:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:53.177 10:10:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:53.177 10:10:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:53.177 10:10:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:28:53.177 10:10:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:53.177 10:10:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:53.177 10:10:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:53.177 10:10:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:53.177 
10:10:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:53.177 10:10:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:53.177 10:10:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:53.177 10:10:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:53.177 10:10:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:53.177 10:10:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:53.177 10:10:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:28:53.177 10:10:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:28:53.177 10:10:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:28:53.177 10:10:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:28:53.177 10:10:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:53.177 10:10:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:28:53.177 10:10:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:53.177 10:10:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:28:53.177 10:10:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:53.177 10:10:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:53.177 2024/07/15 10:10:06 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2024-02.io.spdk:host0 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-02.io.spdk:cnode0 traddr:10.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:28:53.177 request: 00:28:53.177 { 00:28:53.177 "method": "bdev_nvme_attach_controller", 00:28:53.177 "params": { 00:28:53.177 "name": "nvme0", 00:28:53.177 "trtype": "tcp", 00:28:53.177 "traddr": "10.0.0.1", 00:28:53.177 "adrfam": "ipv4", 00:28:53.177 "trsvcid": "4420", 00:28:53.177 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:28:53.177 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:28:53.177 "prchk_reftag": false, 00:28:53.177 "prchk_guard": false, 00:28:53.177 "hdgst": false, 00:28:53.177 "ddgst": false 00:28:53.177 } 00:28:53.177 } 00:28:53.177 Got JSON-RPC error response 00:28:53.177 GoRPCClient: error on JSON-RPC call 00:28:53.177 10:10:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:28:53.177 10:10:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:28:53.177 10:10:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:28:53.177 10:10:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:28:53.177 10:10:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:28:53.177 10:10:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- 
# rpc_cmd bdev_nvme_get_controllers 00:28:53.177 10:10:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:28:53.177 10:10:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:53.177 10:10:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:53.177 10:10:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:53.177 10:10:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:28:53.177 10:10:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:28:53.177 10:10:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:53.177 10:10:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:53.177 10:10:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:53.177 10:10:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:53.177 10:10:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:53.177 10:10:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:53.177 10:10:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:53.177 10:10:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:53.177 10:10:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:53.177 10:10:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:53.177 10:10:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:28:53.177 10:10:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:28:53.177 10:10:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:28:53.177 10:10:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:28:53.177 10:10:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:53.177 10:10:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:28:53.177 10:10:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:53.177 10:10:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:28:53.177 10:10:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:53.177 10:10:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:53.437 2024/07/15 10:10:06 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) dhchap_key:key2 hdgst:%!s(bool=false) hostnqn:nqn.2024-02.io.spdk:host0 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-02.io.spdk:cnode0 traddr:10.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:28:53.437 request: 00:28:53.437 { 00:28:53.437 "method": "bdev_nvme_attach_controller", 00:28:53.437 "params": { 00:28:53.437 "name": 
"nvme0", 00:28:53.437 "trtype": "tcp", 00:28:53.437 "traddr": "10.0.0.1", 00:28:53.437 "adrfam": "ipv4", 00:28:53.437 "trsvcid": "4420", 00:28:53.437 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:28:53.437 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:28:53.437 "prchk_reftag": false, 00:28:53.437 "prchk_guard": false, 00:28:53.437 "hdgst": false, 00:28:53.437 "ddgst": false, 00:28:53.437 "dhchap_key": "key2" 00:28:53.437 } 00:28:53.437 } 00:28:53.437 Got JSON-RPC error response 00:28:53.437 GoRPCClient: error on JSON-RPC call 00:28:53.437 10:10:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:28:53.437 10:10:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:28:53.437 10:10:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:28:53.437 10:10:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:28:53.437 10:10:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:28:53.437 10:10:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:28:53.437 10:10:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:53.437 10:10:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:28:53.437 10:10:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:53.437 10:10:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:53.437 10:10:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:28:53.437 10:10:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:28:53.437 10:10:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:53.437 10:10:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:53.437 10:10:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:53.437 10:10:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:53.437 10:10:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:53.437 10:10:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:53.437 10:10:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:53.437 10:10:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:53.437 10:10:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:53.437 10:10:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:53.437 10:10:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:28:53.437 10:10:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:28:53.437 10:10:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:28:53.437 10:10:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:28:53.437 10:10:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:53.437 10:10:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 
00:28:53.437 10:10:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:53.437 10:10:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:28:53.437 10:10:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:53.437 10:10:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:53.437 2024/07/15 10:10:06 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) dhchap_ctrlr_key:ckey2 dhchap_key:key1 hdgst:%!s(bool=false) hostnqn:nqn.2024-02.io.spdk:host0 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-02.io.spdk:cnode0 traddr:10.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:28:53.437 request: 00:28:53.437 { 00:28:53.438 "method": "bdev_nvme_attach_controller", 00:28:53.438 "params": { 00:28:53.438 "name": "nvme0", 00:28:53.438 "trtype": "tcp", 00:28:53.438 "traddr": "10.0.0.1", 00:28:53.438 "adrfam": "ipv4", 00:28:53.438 "trsvcid": "4420", 00:28:53.438 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:28:53.438 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:28:53.438 "prchk_reftag": false, 00:28:53.438 "prchk_guard": false, 00:28:53.438 "hdgst": false, 00:28:53.438 "ddgst": false, 00:28:53.438 "dhchap_key": "key1", 00:28:53.438 "dhchap_ctrlr_key": "ckey2" 00:28:53.438 } 00:28:53.438 } 00:28:53.438 Got JSON-RPC error response 00:28:53.438 GoRPCClient: error on JSON-RPC call 00:28:53.438 10:10:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:28:53.438 10:10:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:28:53.438 10:10:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:28:53.438 10:10:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:28:53.438 10:10:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:28:53.438 10:10:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:28:53.438 10:10:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@128 -- # cleanup 00:28:53.438 10:10:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:28:53.438 10:10:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:53.438 10:10:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@117 -- # sync 00:28:53.438 10:10:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:53.438 10:10:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@120 -- # set +e 00:28:53.438 10:10:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:53.438 10:10:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:53.438 rmmod nvme_tcp 00:28:53.438 rmmod nvme_fabrics 00:28:53.438 10:10:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:53.438 10:10:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@124 -- # set -e 00:28:53.438 10:10:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@125 -- # return 0 00:28:53.438 10:10:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@489 -- # '[' -n 91431 ']' 00:28:53.438 10:10:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@490 -- # killprocess 91431 00:28:53.438 10:10:06 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@948 -- # '[' -z 91431 ']' 00:28:53.438 10:10:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@952 -- # kill -0 91431 00:28:53.438 10:10:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@953 -- # uname 00:28:53.438 10:10:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:53.438 10:10:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 91431 00:28:53.438 10:10:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:28:53.438 10:10:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:28:53.438 killing process with pid 91431 00:28:53.438 10:10:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 91431' 00:28:53.438 10:10:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@967 -- # kill 91431 00:28:53.438 10:10:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@972 -- # wait 91431 00:28:53.697 10:10:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:53.697 10:10:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:53.697 10:10:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:53.697 10:10:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:53.697 10:10:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:53.697 10:10:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:53.697 10:10:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:53.697 10:10:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:53.697 10:10:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:28:53.697 10:10:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:28:53.697 10:10:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:28:53.697 10:10:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:28:53.697 10:10:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:28:53.697 10:10:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@686 -- # echo 0 00:28:53.697 10:10:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:28:53.697 10:10:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:28:53.697 10:10:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:28:53.697 10:10:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:28:53.697 10:10:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:28:53.697 10:10:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:28:53.697 10:10:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:28:54.635 0000:00:03.0 (1af4 1001): Active devices: 
mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:28:54.635 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:28:54.635 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:28:54.635 10:10:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.318 /tmp/spdk.key-null.5mw /tmp/spdk.key-sha256.d3r /tmp/spdk.key-sha384.3uH /tmp/spdk.key-sha512.HvZ /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log 00:28:54.895 10:10:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:28:55.153 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:28:55.153 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:28:55.153 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:28:55.153 00:28:55.153 real 0m32.377s 00:28:55.153 user 0m30.242s 00:28:55.153 sys 0m4.562s 00:28:55.153 10:10:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:55.153 10:10:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:55.153 ************************************ 00:28:55.153 END TEST nvmf_auth_host 00:28:55.153 ************************************ 00:28:55.412 10:10:08 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:28:55.412 10:10:08 nvmf_tcp -- nvmf/nvmf.sh@107 -- # [[ tcp == \t\c\p ]] 00:28:55.412 10:10:08 nvmf_tcp -- nvmf/nvmf.sh@108 -- # run_test nvmf_digest /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:28:55.412 10:10:08 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:28:55.412 10:10:08 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:55.412 10:10:08 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:55.412 ************************************ 00:28:55.412 START TEST nvmf_digest 00:28:55.412 ************************************ 00:28:55.412 10:10:08 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:28:55.412 * Looking for test storage... 
00:28:55.412 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:28:55.412 10:10:08 nvmf_tcp.nvmf_digest -- host/digest.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:28:55.412 10:10:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:28:55.412 10:10:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:55.412 10:10:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:55.412 10:10:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:55.412 10:10:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:55.412 10:10:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:55.412 10:10:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:55.412 10:10:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:55.412 10:10:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:55.412 10:10:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:55.412 10:10:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:55.412 10:10:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec 00:28:55.412 10:10:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=a2b6b25a-cc90-4aea-9f09-c06f8a634aec 00:28:55.412 10:10:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:55.412 10:10:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:55.412 10:10:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:28:55.412 10:10:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:55.412 10:10:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:28:55.412 10:10:08 nvmf_tcp.nvmf_digest -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:55.412 10:10:08 nvmf_tcp.nvmf_digest -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:55.412 10:10:08 nvmf_tcp.nvmf_digest -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:55.412 10:10:08 nvmf_tcp.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:55.412 10:10:08 nvmf_tcp.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:55.412 10:10:08 nvmf_tcp.nvmf_digest 
-- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:55.412 10:10:08 nvmf_tcp.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:28:55.412 10:10:08 nvmf_tcp.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:55.412 10:10:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@47 -- # : 0 00:28:55.413 10:10:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:55.413 10:10:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:55.413 10:10:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:55.413 10:10:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:55.413 10:10:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:55.413 10:10:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:55.413 10:10:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:55.413 10:10:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:55.413 10:10:08 nvmf_tcp.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:28:55.413 10:10:08 nvmf_tcp.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:28:55.413 10:10:08 nvmf_tcp.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:28:55.413 10:10:08 nvmf_tcp.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:28:55.413 10:10:08 nvmf_tcp.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:28:55.413 10:10:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:55.413 10:10:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:55.413 10:10:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:55.413 10:10:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:55.413 10:10:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:55.413 10:10:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:55.413 10:10:08 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:55.413 10:10:08 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:55.413 10:10:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:28:55.413 10:10:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:28:55.413 10:10:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@423 -- # [[ virt == phy ]] 
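The entries that follow show nvmf_veth_init (sourced from test/nvmf/common.sh) tearing down any stale interfaces and then building the virtual network the TCP transport tests run over. Condensed to plain commands, and keeping only steps that actually appear in this log, the topology setup amounts to roughly the sketch below; the address plan (initiator 10.0.0.1 on the host, target addresses 10.0.0.2 and 10.0.0.3 inside the nvmf_tgt_ns_spdk namespace, everything joined by the nvmf_br bridge) is taken from the log itself, so treat this as an illustration rather than the authoritative script.

  # sketch of the veth topology nvmf_veth_init builds (commands as they appear in this log)
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br      # initiator side
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br       # first target interface
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2      # second target interface
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if                       # initiator IP
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  ip link set nvmf_init_if up && ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up  && ip link set nvmf_tgt_br2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3                       # sanity checks, as in the log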
00:28:55.413 10:10:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:28:55.413 10:10:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:28:55.413 10:10:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@432 -- # nvmf_veth_init 00:28:55.413 10:10:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:55.413 10:10:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:55.413 10:10:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:28:55.413 10:10:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:28:55.413 10:10:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:28:55.413 10:10:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:28:55.413 10:10:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:28:55.413 10:10:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:55.413 10:10:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:28:55.413 10:10:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:28:55.413 10:10:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:28:55.413 10:10:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:28:55.413 10:10:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:28:55.413 10:10:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:28:55.413 Cannot find device "nvmf_tgt_br" 00:28:55.413 10:10:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@155 -- # true 00:28:55.413 10:10:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:28:55.413 Cannot find device "nvmf_tgt_br2" 00:28:55.413 10:10:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@156 -- # true 00:28:55.413 10:10:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:28:55.413 10:10:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:28:55.413 Cannot find device "nvmf_tgt_br" 00:28:55.413 10:10:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@158 -- # true 00:28:55.413 10:10:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:28:55.671 Cannot find device "nvmf_tgt_br2" 00:28:55.671 10:10:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@159 -- # true 00:28:55.671 10:10:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:28:55.671 10:10:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:28:55.671 10:10:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:28:55.671 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:28:55.671 10:10:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@162 -- # true 00:28:55.671 10:10:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:28:55.671 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:28:55.671 10:10:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@163 -- # true 00:28:55.671 10:10:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:28:55.671 10:10:09 nvmf_tcp.nvmf_digest -- 
nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:28:55.671 10:10:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:28:55.671 10:10:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:28:55.671 10:10:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:28:55.671 10:10:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:28:55.671 10:10:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:28:55.671 10:10:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:28:55.671 10:10:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:28:55.671 10:10:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:28:55.671 10:10:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:28:55.671 10:10:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:28:55.671 10:10:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:28:55.671 10:10:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:28:55.671 10:10:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:28:55.671 10:10:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:28:55.671 10:10:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:28:55.671 10:10:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:28:55.671 10:10:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:28:55.671 10:10:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:28:55.671 10:10:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:28:55.671 10:10:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:28:55.671 10:10:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:28:55.671 10:10:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:28:55.671 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:55.671 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.065 ms 00:28:55.671 00:28:55.671 --- 10.0.0.2 ping statistics --- 00:28:55.671 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:55.671 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:28:55.671 10:10:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:28:55.671 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:28:55.671 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.031 ms 00:28:55.671 00:28:55.671 --- 10.0.0.3 ping statistics --- 00:28:55.671 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:55.671 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:28:55.671 10:10:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:28:55.671 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:55.671 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.044 ms 00:28:55.671 00:28:55.671 --- 10.0.0.1 ping statistics --- 00:28:55.671 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:55.671 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:28:55.671 10:10:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:55.671 10:10:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@433 -- # return 0 00:28:55.671 10:10:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:55.671 10:10:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:55.671 10:10:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:55.671 10:10:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:55.671 10:10:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:55.671 10:10:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:55.671 10:10:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:55.671 10:10:09 nvmf_tcp.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:28:55.671 10:10:09 nvmf_tcp.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:28:55.671 10:10:09 nvmf_tcp.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:28:55.671 10:10:09 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:28:55.671 10:10:09 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:55.671 10:10:09 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:28:55.930 ************************************ 00:28:55.930 START TEST nvmf_digest_clean 00:28:55.930 ************************************ 00:28:55.930 10:10:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1123 -- # run_digest 00:28:55.930 10:10:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:28:55.930 10:10:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:28:55.930 10:10:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:28:55.930 10:10:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:28:55.930 10:10:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:28:55.930 10:10:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:55.930 10:10:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:55.930 10:10:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:55.930 10:10:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@481 -- # nvmfpid=92994 00:28:55.930 10:10:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:28:55.930 10:10:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@482 -- # waitforlisten 92994 00:28:55.930 10:10:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 92994 ']' 00:28:55.930 10:10:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:55.930 10:10:09 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:55.930 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:55.930 10:10:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:55.930 10:10:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:55.930 10:10:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:55.930 [2024-07-15 10:10:09.301321] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:28:55.930 [2024-07-15 10:10:09.301393] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:55.930 [2024-07-15 10:10:09.438499] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:56.189 [2024-07-15 10:10:09.541369] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:56.189 [2024-07-15 10:10:09.541418] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:56.189 [2024-07-15 10:10:09.541424] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:56.189 [2024-07-15 10:10:09.541429] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:56.189 [2024-07-15 10:10:09.541433] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
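By this point nvmfappstart has launched the SPDK target inside the target namespace with --wait-for-rpc and recorded its PID (nvmfpid=92994 above), and the test is waiting for the RPC socket before configuring anything. Reduced to the commands visible in this log, the launch-and-wait step looks roughly like the sketch below; the polling loop is a simplified stand-in for the waitforlisten helper in autotest_common.sh, not its actual body.

  # start nvmf_tgt in the test namespace, paused until framework init is requested over RPC
  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc &
  nvmfpid=$!

  # simplified stand-in for waitforlisten: poll until /var/tmp/spdk.sock answers an RPC
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
      sleep 0.1
  done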
00:28:56.189 [2024-07-15 10:10:09.541469] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:56.754 10:10:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:56.754 10:10:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:28:56.754 10:10:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:56.754 10:10:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:56.754 10:10:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:56.754 10:10:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:56.754 10:10:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:28:56.754 10:10:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:28:56.754 10:10:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:28:56.754 10:10:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:56.754 10:10:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:56.754 null0 00:28:56.754 [2024-07-15 10:10:10.299895] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:56.754 [2024-07-15 10:10:10.323942] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:56.754 10:10:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:56.754 10:10:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:28:56.754 10:10:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:28:56.754 10:10:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:28:56.754 10:10:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:28:56.754 10:10:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:28:56.754 10:10:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:28:56.754 10:10:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:28:56.754 10:10:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=93044 00:28:56.754 10:10:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 93044 /var/tmp/bperf.sock 00:28:56.754 10:10:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:28:56.754 10:10:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 93044 ']' 00:28:56.754 10:10:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:56.755 10:10:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:56.755 10:10:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
00:28:56.755 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:56.755 10:10:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:56.755 10:10:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:57.013 [2024-07-15 10:10:10.383141] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:28:57.013 [2024-07-15 10:10:10.383229] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93044 ] 00:28:57.013 [2024-07-15 10:10:10.520768] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:57.273 [2024-07-15 10:10:10.636940] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:57.840 10:10:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:57.840 10:10:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:28:57.840 10:10:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:28:57.840 10:10:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:28:57.840 10:10:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:28:58.099 10:10:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:58.099 10:10:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:58.357 nvme0n1 00:28:58.357 10:10:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:28:58.357 10:10:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:58.357 Running I/O for 2 seconds... 
00:29:00.898 00:29:00.898 Latency(us) 00:29:00.898 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:00.898 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:29:00.898 nvme0n1 : 2.00 24466.65 95.57 0.00 0.00 5226.17 2589.96 13450.62 00:29:00.898 =================================================================================================================== 00:29:00.898 Total : 24466.65 95.57 0.00 0.00 5226.17 2589.96 13450.62 00:29:00.898 0 00:29:00.898 10:10:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:29:00.898 10:10:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:29:00.898 10:10:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:29:00.898 | select(.opcode=="crc32c") 00:29:00.898 | "\(.module_name) \(.executed)"' 00:29:00.898 10:10:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:29:00.898 10:10:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:29:00.898 10:10:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:29:00.898 10:10:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:29:00.898 10:10:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:29:00.898 10:10:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:29:00.898 10:10:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 93044 00:29:00.898 10:10:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 93044 ']' 00:29:00.898 10:10:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 93044 00:29:00.898 10:10:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:29:00.898 10:10:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:00.898 10:10:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 93044 00:29:00.898 10:10:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:29:00.898 10:10:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:29:00.898 killing process with pid 93044 00:29:00.898 10:10:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 93044' 00:29:00.898 10:10:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 93044 00:29:00.898 Received shutdown signal, test time was about 2.000000 seconds 00:29:00.898 00:29:00.898 Latency(us) 00:29:00.898 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:00.898 =================================================================================================================== 00:29:00.898 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:00.898 10:10:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 93044 00:29:00.898 10:10:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:29:00.898 10:10:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@77 -- # local rw bs qd scan_dsa 00:29:00.898 10:10:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:29:00.898 10:10:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:29:00.898 10:10:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:29:00.898 10:10:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:29:00.898 10:10:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:29:00.898 10:10:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=93132 00:29:00.898 10:10:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 93132 /var/tmp/bperf.sock 00:29:00.898 10:10:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:29:00.898 10:10:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 93132 ']' 00:29:00.898 10:10:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:00.898 10:10:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:00.898 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:00.898 10:10:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:00.898 10:10:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:00.898 10:10:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:29:00.898 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:00.898 Zero copy mechanism will not be used. 00:29:00.898 [2024-07-15 10:10:14.371758] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
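Every run_bperf invocation in this suite repeats the same sequence, and all of its pieces are visible verbatim in the entries above: start bdevperf on /var/tmp/bperf.sock with --wait-for-rpc, finish its framework init over RPC, attach an NVMe-oF TCP controller with data digest enabled, drive I/O for the configured 2 seconds with bdevperf.py, then read accel_get_stats and confirm that crc32c work was actually executed (by the software module here, since DSA is disabled). A condensed sketch of one cycle, using only commands and the jq filter shown in this log; the rpc() wrapper is shorthand for the repeated rpc.py calls, not a helper from the test itself.

  # one run_bperf cycle (randread, 4 KiB blocks, queue depth 128), condensed from the log above
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &
  bperfpid=$!

  rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock "$@"; }

  rpc framework_start_init
  rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -b nvme0

  # run the workload, then check which accel module computed the crc32c digests
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
  rpc accel_get_stats | jq -rc '.operations[]
      | select(.opcode=="crc32c")
      | "\(.module_name) \(.executed)"'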
00:29:00.898 [2024-07-15 10:10:14.371825] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93132 ] 00:29:01.156 [2024-07-15 10:10:14.506210] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:01.156 [2024-07-15 10:10:14.610833] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:01.721 10:10:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:01.721 10:10:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:29:01.721 10:10:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:29:01.721 10:10:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:29:01.722 10:10:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:29:01.979 10:10:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:01.979 10:10:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:02.237 nvme0n1 00:29:02.237 10:10:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:29:02.237 10:10:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:02.496 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:02.496 Zero copy mechanism will not be used. 00:29:02.496 Running I/O for 2 seconds... 
00:29:04.421 00:29:04.421 Latency(us) 00:29:04.421 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:04.421 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:29:04.421 nvme0n1 : 2.00 9668.08 1208.51 0.00 0.00 1652.13 479.36 3348.35 00:29:04.421 =================================================================================================================== 00:29:04.421 Total : 9668.08 1208.51 0.00 0.00 1652.13 479.36 3348.35 00:29:04.421 0 00:29:04.421 10:10:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:29:04.421 10:10:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:29:04.421 10:10:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:29:04.421 | select(.opcode=="crc32c") 00:29:04.421 | "\(.module_name) \(.executed)"' 00:29:04.421 10:10:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:29:04.421 10:10:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:29:04.717 10:10:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:29:04.717 10:10:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:29:04.717 10:10:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:29:04.717 10:10:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:29:04.717 10:10:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 93132 00:29:04.717 10:10:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 93132 ']' 00:29:04.717 10:10:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 93132 00:29:04.717 10:10:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:29:04.717 10:10:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:04.717 10:10:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 93132 00:29:04.717 10:10:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:29:04.717 10:10:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:29:04.717 killing process with pid 93132 00:29:04.717 10:10:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 93132' 00:29:04.717 10:10:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 93132 00:29:04.717 Received shutdown signal, test time was about 2.000000 seconds 00:29:04.717 00:29:04.717 Latency(us) 00:29:04.717 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:04.717 =================================================================================================================== 00:29:04.717 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:04.717 10:10:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 93132 00:29:04.976 10:10:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:29:04.976 10:10:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@77 -- # local rw bs qd scan_dsa 00:29:04.976 10:10:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:29:04.976 10:10:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:29:04.976 10:10:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:29:04.976 10:10:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:29:04.976 10:10:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:29:04.976 10:10:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=93223 00:29:04.976 10:10:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 93223 /var/tmp/bperf.sock 00:29:04.976 10:10:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:29:04.976 10:10:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 93223 ']' 00:29:04.976 10:10:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:04.976 10:10:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:04.977 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:04.977 10:10:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:04.977 10:10:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:04.977 10:10:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:29:04.977 [2024-07-15 10:10:18.395945] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
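A quick sanity check on the bdevperf result tables printed above: the MiB/s column is just IOPS multiplied by the I/O size. For the 4 KiB randread run, 24466.65 IOPS x 4096 bytes works out to about 95.57 MiB/s, and for the 128 KiB randread run, 9668.08 IOPS x 131072 bytes is about 1208.51 MiB/s, both matching the tables. The same check as a pair of shell one-liners:

  # MiB/s = IOPS * io_size_bytes / 2^20, cross-checking the tables above
  awk 'BEGIN { printf "%.2f\n", 24466.65 * 4096   / 1048576 }'   # -> 95.57   (randread, 4 KiB)
  awk 'BEGIN { printf "%.2f\n", 9668.08  * 131072 / 1048576 }'   # -> 1208.51 (randread, 128 KiB)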
00:29:04.977 [2024-07-15 10:10:18.396017] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93223 ] 00:29:04.977 [2024-07-15 10:10:18.533776] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:05.238 [2024-07-15 10:10:18.640428] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:05.804 10:10:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:05.804 10:10:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:29:05.804 10:10:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:29:05.804 10:10:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:29:05.804 10:10:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:29:06.063 10:10:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:06.063 10:10:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:06.322 nvme0n1 00:29:06.322 10:10:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:29:06.322 10:10:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:06.322 Running I/O for 2 seconds... 
00:29:08.860 00:29:08.860 Latency(us) 00:29:08.860 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:08.860 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:08.860 nvme0n1 : 2.00 29298.46 114.45 0.00 0.00 4363.20 1931.74 12019.70 00:29:08.860 =================================================================================================================== 00:29:08.860 Total : 29298.46 114.45 0.00 0.00 4363.20 1931.74 12019.70 00:29:08.860 0 00:29:08.860 10:10:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:29:08.860 10:10:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:29:08.860 10:10:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:29:08.860 10:10:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:29:08.860 | select(.opcode=="crc32c") 00:29:08.860 | "\(.module_name) \(.executed)"' 00:29:08.860 10:10:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:29:08.860 10:10:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:29:08.860 10:10:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:29:08.860 10:10:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:29:08.860 10:10:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:29:08.860 10:10:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 93223 00:29:08.860 10:10:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 93223 ']' 00:29:08.860 10:10:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 93223 00:29:08.860 10:10:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:29:08.860 10:10:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:08.860 10:10:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 93223 00:29:08.860 10:10:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:29:08.860 10:10:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:29:08.860 killing process with pid 93223 00:29:08.860 10:10:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 93223' 00:29:08.860 10:10:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 93223 00:29:08.860 Received shutdown signal, test time was about 2.000000 seconds 00:29:08.860 00:29:08.860 Latency(us) 00:29:08.860 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:08.860 =================================================================================================================== 00:29:08.860 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:08.860 10:10:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 93223 00:29:08.860 10:10:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:29:08.860 10:10:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@77 -- # local rw bs qd scan_dsa 00:29:08.860 10:10:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:29:08.860 10:10:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:29:08.860 10:10:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:29:08.860 10:10:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:29:08.860 10:10:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:29:08.860 10:10:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=93308 00:29:08.860 10:10:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 93308 /var/tmp/bperf.sock 00:29:08.860 10:10:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:29:08.860 10:10:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 93308 ']' 00:29:08.860 10:10:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:08.860 10:10:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:08.860 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:08.860 10:10:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:08.860 10:10:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:08.860 10:10:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:29:08.860 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:08.860 Zero copy mechanism will not be used. 00:29:08.860 [2024-07-15 10:10:22.392565] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
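Between runs, and again when the whole suite finishes, the log shows the killprocess helper tearing down each bdevperf instance and eventually the target itself: it checks that the PID still belongs to an SPDK reactor process, logs which PID it is killing, sends the signal, and waits for the process to exit so its status is reaped. A minimal sketch of that pattern, following the kill -0 / ps / kill / wait steps visible above; the function body is a simplification for illustration, not the autotest_common.sh source.

  # simplified version of the killprocess teardown pattern seen throughout this log
  killprocess_sketch() {
      local pid=$1
      kill -0 "$pid" 2>/dev/null || return 0           # nothing to do if it already exited
      local process_name
      process_name=$(ps --no-headers -o comm= "$pid")  # e.g. reactor_0 or reactor_1
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid"                                      # reap the child (works for PIDs started by this shell)
  }

  killprocess_sketch "$bperfpid"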
00:29:08.860 [2024-07-15 10:10:22.392640] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93308 ] 00:29:09.119 [2024-07-15 10:10:22.519223] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:09.119 [2024-07-15 10:10:22.623962] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:09.688 10:10:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:09.688 10:10:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:29:09.688 10:10:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:29:09.688 10:10:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:29:09.688 10:10:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:29:09.947 10:10:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:09.947 10:10:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:10.206 nvme0n1 00:29:10.464 10:10:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:29:10.465 10:10:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:10.465 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:10.465 Zero copy mechanism will not be used. 00:29:10.465 Running I/O for 2 seconds... 
00:29:12.371 00:29:12.371 Latency(us) 00:29:12.371 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:12.371 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:29:12.371 nvme0n1 : 2.00 8472.76 1059.10 0.00 0.00 1885.15 1273.52 3291.11 00:29:12.371 =================================================================================================================== 00:29:12.371 Total : 8472.76 1059.10 0.00 0.00 1885.15 1273.52 3291.11 00:29:12.371 0 00:29:12.371 10:10:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:29:12.371 10:10:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:29:12.371 10:10:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:29:12.371 10:10:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:29:12.371 | select(.opcode=="crc32c") 00:29:12.371 | "\(.module_name) \(.executed)"' 00:29:12.372 10:10:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:29:12.631 10:10:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:29:12.631 10:10:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:29:12.632 10:10:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:29:12.632 10:10:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:29:12.632 10:10:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 93308 00:29:12.632 10:10:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 93308 ']' 00:29:12.632 10:10:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 93308 00:29:12.632 10:10:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:29:12.632 10:10:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:12.632 10:10:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 93308 00:29:12.632 10:10:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:29:12.632 10:10:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:29:12.632 killing process with pid 93308 00:29:12.632 10:10:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 93308' 00:29:12.632 10:10:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 93308 00:29:12.632 Received shutdown signal, test time was about 2.000000 seconds 00:29:12.632 00:29:12.632 Latency(us) 00:29:12.632 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:12.632 =================================================================================================================== 00:29:12.632 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:12.632 10:10:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 93308 00:29:12.891 10:10:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 92994 00:29:12.891 10:10:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@948 -- # '[' -z 92994 ']' 00:29:12.891 10:10:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 92994 00:29:12.891 10:10:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:29:12.891 10:10:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:12.891 10:10:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 92994 00:29:12.891 10:10:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:29:12.891 10:10:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:29:12.892 killing process with pid 92994 00:29:12.892 10:10:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 92994' 00:29:12.892 10:10:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 92994 00:29:12.892 10:10:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 92994 00:29:13.153 00:29:13.153 real 0m17.319s 00:29:13.153 user 0m32.541s 00:29:13.153 sys 0m4.271s 00:29:13.153 10:10:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:13.153 10:10:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:29:13.153 ************************************ 00:29:13.153 END TEST nvmf_digest_clean 00:29:13.153 ************************************ 00:29:13.153 10:10:26 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1142 -- # return 0 00:29:13.153 10:10:26 nvmf_tcp.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:29:13.153 10:10:26 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:29:13.153 10:10:26 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:13.153 10:10:26 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:29:13.153 ************************************ 00:29:13.153 START TEST nvmf_digest_error 00:29:13.153 ************************************ 00:29:13.153 10:10:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1123 -- # run_digest_error 00:29:13.153 10:10:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:29:13.153 10:10:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:29:13.153 10:10:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:13.153 10:10:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:13.153 10:10:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@481 -- # nvmfpid=93421 00:29:13.153 10:10:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@482 -- # waitforlisten 93421 00:29:13.153 10:10:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:29:13.153 10:10:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 93421 ']' 00:29:13.153 10:10:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:13.154 10:10:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@834 -- # local max_retries=100 00:29:13.154 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:13.154 10:10:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:13.154 10:10:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:13.154 10:10:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:13.154 [2024-07-15 10:10:26.705786] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:29:13.154 [2024-07-15 10:10:26.705860] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:13.413 [2024-07-15 10:10:26.842111] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:13.413 [2024-07-15 10:10:26.943238] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:13.413 [2024-07-15 10:10:26.943292] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:13.413 [2024-07-15 10:10:26.943299] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:13.413 [2024-07-15 10:10:26.943303] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:13.413 [2024-07-15 10:10:26.943307] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:13.413 [2024-07-15 10:10:26.943333] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:13.982 10:10:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:13.982 10:10:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:29:13.982 10:10:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:29:13.982 10:10:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:13.982 10:10:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:14.242 10:10:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:14.242 10:10:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:29:14.242 10:10:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:14.242 10:10:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:14.242 [2024-07-15 10:10:27.602436] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:29:14.242 10:10:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:14.242 10:10:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:29:14.242 10:10:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:29:14.242 10:10:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:14.242 10:10:27 
nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:14.242 null0 00:29:14.242 [2024-07-15 10:10:27.697981] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:14.242 [2024-07-15 10:10:27.722017] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:14.242 10:10:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:14.242 10:10:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:29:14.242 10:10:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:29:14.242 10:10:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:29:14.242 10:10:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:29:14.242 10:10:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:29:14.242 10:10:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=93465 00:29:14.242 10:10:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 93465 /var/tmp/bperf.sock 00:29:14.242 10:10:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:29:14.242 10:10:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 93465 ']' 00:29:14.242 10:10:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:14.242 10:10:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:14.242 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:14.242 10:10:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:14.242 10:10:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:14.242 10:10:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:14.242 [2024-07-15 10:10:27.779199] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:29:14.242 [2024-07-15 10:10:27.779274] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93465 ] 00:29:14.502 [2024-07-15 10:10:27.901826] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:14.502 [2024-07-15 10:10:28.006301] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:15.440 10:10:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:15.440 10:10:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:29:15.440 10:10:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:29:15.440 10:10:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:29:15.440 10:10:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:29:15.440 10:10:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:15.440 10:10:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:15.440 10:10:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:15.440 10:10:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:15.440 10:10:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:15.699 nvme0n1 00:29:15.699 10:10:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:29:15.699 10:10:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:15.699 10:10:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:15.699 10:10:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:15.699 10:10:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:29:15.700 10:10:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:15.700 Running I/O for 2 seconds... 
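Note on the pass below: unlike the clean case, nvmf_digest_error deliberately breaks the digest path. The target was started with --wait-for-rpc so that accel_assign_opc -o crc32c -m error could route crc32c to the error-injection accel module before framework init; injection is kept disabled (-t disable) while bdevperf attaches with --ddgst and the options from host/digest.sh@61, and only then is corruption enabled with -t corrupt -i 256. The target-side crc32c results are corrupted from that point, so the host's receive-path digest check fails and the long run of 'data digest error on tqpair' / 'COMMAND TRANSIENT TRANSPORT ERROR (00/22)' completions that follows is the expected, asserted-on outcome rather than a real transport failure. A minimal sketch of that toggling, assuming rpc_cmd resolves to the target's default socket /var/tmp/spdk.sock as it does in this job (framework init and listener setup omitted; see the target log above):

  # Rough replay of host/digest.sh@104, @63 and @67; rpc_cmd/bperf_rpc are replaced by
  # plain rpc.py calls, socket paths are the ones this job waited on.
  SPDK=/home/vagrant/spdk_repo/spdk
  TGT=/var/tmp/spdk.sock      # nvmf_tgt RPC socket (started with --wait-for-rpc)
  BPERF=/var/tmp/bperf.sock   # bdevperf RPC socket for the randread/4096/qd128 job

  # Target side, before framework_start_init: send crc32c through the error module.
  "$SPDK/scripts/rpc.py" -s "$TGT" accel_assign_opc -o crc32c -m error

  # Keep injection off while the initiator attaches cleanly with data digest on.
  "$SPDK/scripts/rpc.py" -s "$TGT" accel_error_inject_error -o crc32c -t disable
  "$SPDK/scripts/rpc.py" -s "$BPERF" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  "$SPDK/scripts/rpc.py" -s "$BPERF" bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

  # Now corrupt the crc32c results (same flags digest.sh passes) and run the workload;
  # the reads logged below complete with data digest / transient transport errors.
  "$SPDK/scripts/rpc.py" -s "$TGT" accel_error_inject_error -o crc32c -t corrupt -i 256
  "$SPDK/examples/bdev/bdevperf/bdevperf.py" -s "$BPERF" perform_tests

The disable/enable split mirrors digest.sh: injection stays off until the controller is attached, so only the timed reads see corrupted digests.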
00:29:15.700 [2024-07-15 10:10:29.249381] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fe63e0) 00:29:15.700 [2024-07-15 10:10:29.249449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:10691 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.700 [2024-07-15 10:10:29.249459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:15.700 [2024-07-15 10:10:29.261204] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fe63e0) 00:29:15.700 [2024-07-15 10:10:29.261245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:19140 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.700 [2024-07-15 10:10:29.261254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:15.700 [2024-07-15 10:10:29.271716] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fe63e0) 00:29:15.700 [2024-07-15 10:10:29.271750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4491 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.700 [2024-07-15 10:10:29.271757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:15.700 [2024-07-15 10:10:29.281197] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fe63e0) 00:29:15.700 [2024-07-15 10:10:29.281245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:24115 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.700 [2024-07-15 10:10:29.281257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:15.959 [2024-07-15 10:10:29.291543] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fe63e0) 00:29:15.960 [2024-07-15 10:10:29.291585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11283 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.960 [2024-07-15 10:10:29.291595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:15.960 [2024-07-15 10:10:29.302934] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fe63e0) 00:29:15.960 [2024-07-15 10:10:29.302985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24225 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.960 [2024-07-15 10:10:29.302993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:15.960 [2024-07-15 10:10:29.314248] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fe63e0) 00:29:15.960 [2024-07-15 10:10:29.314281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:24759 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.960 [2024-07-15 10:10:29.314290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:15.960 [2024-07-15 10:10:29.326310] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fe63e0) 00:29:15.960 [2024-07-15 10:10:29.326344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:14520 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.960 [2024-07-15 10:10:29.326353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:15.960 [2024-07-15 10:10:29.334361] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fe63e0) 00:29:15.960 [2024-07-15 10:10:29.334392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:24269 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.960 [2024-07-15 10:10:29.334400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:15.960 [2024-07-15 10:10:29.345347] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fe63e0) 00:29:15.960 [2024-07-15 10:10:29.345391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:439 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.960 [2024-07-15 10:10:29.345401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:15.960 [2024-07-15 10:10:29.357356] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fe63e0) 00:29:15.960 [2024-07-15 10:10:29.357389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19860 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.960 [2024-07-15 10:10:29.357397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:15.960 [2024-07-15 10:10:29.366902] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fe63e0) 00:29:15.960 [2024-07-15 10:10:29.366932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:3805 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.960 [2024-07-15 10:10:29.366940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:15.960 [2024-07-15 10:10:29.378355] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fe63e0) 00:29:15.960 [2024-07-15 10:10:29.378389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:23334 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.960 [2024-07-15 10:10:29.378396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:15.960 [2024-07-15 10:10:29.388682] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fe63e0) 00:29:15.960 [2024-07-15 10:10:29.388713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:20443 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.960 [2024-07-15 10:10:29.388721] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:15.960 [2024-07-15 10:10:29.399914] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fe63e0) 00:29:15.960 [2024-07-15 10:10:29.399954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:13932 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.960 [2024-07-15 10:10:29.399962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:15.960 [2024-07-15 10:10:29.410376] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fe63e0) 00:29:15.960 [2024-07-15 10:10:29.410411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:4170 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.960 [2024-07-15 10:10:29.410419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:15.960 [2024-07-15 10:10:29.421605] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fe63e0) 00:29:15.960 [2024-07-15 10:10:29.421637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:11272 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.960 [2024-07-15 10:10:29.421644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:15.960 [2024-07-15 10:10:29.433337] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fe63e0) 00:29:15.960 [2024-07-15 10:10:29.433372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18449 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.960 [2024-07-15 10:10:29.433379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:15.960 [2024-07-15 10:10:29.443826] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fe63e0) 00:29:15.960 [2024-07-15 10:10:29.443867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:24132 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.960 [2024-07-15 10:10:29.443877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:15.960 [2024-07-15 10:10:29.456638] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fe63e0) 00:29:15.960 [2024-07-15 10:10:29.456711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:15069 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.960 [2024-07-15 10:10:29.456720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:15.960 [2024-07-15 10:10:29.465133] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fe63e0) 00:29:15.960 [2024-07-15 10:10:29.465167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:19464 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:15.960 [2024-07-15 10:10:29.465190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:15.960 [2024-07-15 10:10:29.476143] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fe63e0) 00:29:15.960 [2024-07-15 10:10:29.476175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:11915 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.960 [2024-07-15 10:10:29.476184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:15.960 [2024-07-15 10:10:29.485599] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fe63e0) 00:29:15.960 [2024-07-15 10:10:29.485628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:18121 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.960 [2024-07-15 10:10:29.485635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:15.960 [2024-07-15 10:10:29.498742] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fe63e0) 00:29:15.960 [2024-07-15 10:10:29.498776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:24735 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.960 [2024-07-15 10:10:29.498784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:15.961 [2024-07-15 10:10:29.509925] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fe63e0) 00:29:15.961 [2024-07-15 10:10:29.509959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6001 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.961 [2024-07-15 10:10:29.509967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:15.961 [2024-07-15 10:10:29.519819] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fe63e0) 00:29:15.961 [2024-07-15 10:10:29.519853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:19117 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.961 [2024-07-15 10:10:29.519861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:15.961 [2024-07-15 10:10:29.528830] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fe63e0) 00:29:15.961 [2024-07-15 10:10:29.528862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:1625 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.961 [2024-07-15 10:10:29.528870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:15.961 [2024-07-15 10:10:29.537725] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fe63e0) 00:29:15.961 [2024-07-15 10:10:29.537757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 
lba:19918 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.961 [2024-07-15 10:10:29.537765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:16.221 [2024-07-15 10:10:29.549943] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fe63e0) 00:29:16.221 [2024-07-15 10:10:29.549978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4662 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.221 [2024-07-15 10:10:29.549987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:16.221 [2024-07-15 10:10:29.563391] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fe63e0) 00:29:16.221 [2024-07-15 10:10:29.563428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23820 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.221 [2024-07-15 10:10:29.563437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:16.221 [2024-07-15 10:10:29.575260] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fe63e0) 00:29:16.221 [2024-07-15 10:10:29.575305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:17926 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.221 [2024-07-15 10:10:29.575315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:16.221 [2024-07-15 10:10:29.585763] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fe63e0) 00:29:16.221 [2024-07-15 10:10:29.585804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:25300 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.221 [2024-07-15 10:10:29.585812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:16.221 [2024-07-15 10:10:29.595790] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fe63e0) 00:29:16.221 [2024-07-15 10:10:29.595823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:13826 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.221 [2024-07-15 10:10:29.595832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:16.221 [2024-07-15 10:10:29.606261] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fe63e0) 00:29:16.221 [2024-07-15 10:10:29.606298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:21865 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.221 [2024-07-15 10:10:29.606306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:16.221 [2024-07-15 10:10:29.618685] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fe63e0) 00:29:16.221 [2024-07-15 10:10:29.618724] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:25487 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.221 [2024-07-15 10:10:29.618732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:16.221 [2024-07-15 10:10:29.630117] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fe63e0) 00:29:16.221 [2024-07-15 10:10:29.630158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:4685 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.221 [2024-07-15 10:10:29.630167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:16.221 [2024-07-15 10:10:29.640814] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fe63e0) 00:29:16.221 [2024-07-15 10:10:29.640852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:14379 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.221 [2024-07-15 10:10:29.640862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:16.221 [2024-07-15 10:10:29.650774] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fe63e0) 00:29:16.221 [2024-07-15 10:10:29.650810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:22712 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.221 [2024-07-15 10:10:29.650819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:16.221 [2024-07-15 10:10:29.662570] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fe63e0) 00:29:16.221 [2024-07-15 10:10:29.662618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24418 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.221 [2024-07-15 10:10:29.662630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:16.221 [2024-07-15 10:10:29.674367] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fe63e0) 00:29:16.221 [2024-07-15 10:10:29.674408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:1797 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.221 [2024-07-15 10:10:29.674417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:16.221 [2024-07-15 10:10:29.685322] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fe63e0) 00:29:16.221 [2024-07-15 10:10:29.685361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:953 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.221 [2024-07-15 10:10:29.685370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:16.221 [2024-07-15 10:10:29.696050] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fe63e0) 
00:29:16.221 [2024-07-15 10:10:29.696086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:23326 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.221 [2024-07-15 10:10:29.696111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:16.221 [2024-07-15 10:10:29.708617] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fe63e0) 00:29:16.221 [2024-07-15 10:10:29.708667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:25248 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.221 [2024-07-15 10:10:29.708676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:16.221 [2024-07-15 10:10:29.718109] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fe63e0) 00:29:16.221 [2024-07-15 10:10:29.718161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:1526 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.221 [2024-07-15 10:10:29.718173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:16.221 [2024-07-15 10:10:29.729980] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fe63e0) 00:29:16.221 [2024-07-15 10:10:29.730022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:22371 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.221 [2024-07-15 10:10:29.730032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:16.221 [2024-07-15 10:10:29.740789] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fe63e0) 00:29:16.221 [2024-07-15 10:10:29.740828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:8686 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.221 [2024-07-15 10:10:29.740838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:16.221 [2024-07-15 10:10:29.752992] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fe63e0) 00:29:16.221 [2024-07-15 10:10:29.753034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:9496 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.221 [2024-07-15 10:10:29.753044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:16.221 [2024-07-15 10:10:29.764379] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fe63e0) 00:29:16.221 [2024-07-15 10:10:29.764424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:9293 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.221 [2024-07-15 10:10:29.764433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:16.221 [2024-07-15 10:10:29.776272] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fe63e0) 00:29:16.221 [2024-07-15 10:10:29.776326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:18320 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.221 [2024-07-15 10:10:29.776338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:16.222 [2024-07-15 10:10:29.789628] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fe63e0) 00:29:16.222 [2024-07-15 10:10:29.789685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:427 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.222 [2024-07-15 10:10:29.789695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:16.222 [2024-07-15 10:10:29.800706] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fe63e0) 00:29:16.222 [2024-07-15 10:10:29.800745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:5962 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.222 [2024-07-15 10:10:29.800754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:16.481 [2024-07-15 10:10:29.811780] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fe63e0) 00:29:16.481 [2024-07-15 10:10:29.811820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:14693 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.481 [2024-07-15 10:10:29.811828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:16.481 [2024-07-15 10:10:29.821804] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fe63e0) 00:29:16.481 [2024-07-15 10:10:29.821849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:11354 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.481 [2024-07-15 10:10:29.821859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:16.481 [2024-07-15 10:10:29.834892] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fe63e0) 00:29:16.481 [2024-07-15 10:10:29.834937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:24585 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.481 [2024-07-15 10:10:29.834946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:16.481 [2024-07-15 10:10:29.847368] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fe63e0) 00:29:16.481 [2024-07-15 10:10:29.847416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:14436 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.481 [2024-07-15 10:10:29.847443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:29:16.481 [2024-07-15 10:10:29.856944] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fe63e0) 00:29:16.481 [2024-07-15 10:10:29.856985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:18633 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.481 [2024-07-15 10:10:29.857010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:16.481 [2024-07-15 10:10:29.870653] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fe63e0) 00:29:16.481 [2024-07-15 10:10:29.870714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:2949 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.481 [2024-07-15 10:10:29.870724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:16.481 [2024-07-15 10:10:29.881564] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fe63e0) 00:29:16.481 [2024-07-15 10:10:29.881607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:3563 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.481 [2024-07-15 10:10:29.881632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:16.481 [2024-07-15 10:10:29.893118] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fe63e0) 00:29:16.481 [2024-07-15 10:10:29.893158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:22599 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.481 [2024-07-15 10:10:29.893168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:16.481 [2024-07-15 10:10:29.903283] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fe63e0) 00:29:16.481 [2024-07-15 10:10:29.903318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:13365 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.481 [2024-07-15 10:10:29.903326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:16.481 [2024-07-15 10:10:29.913047] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fe63e0) 00:29:16.481 [2024-07-15 10:10:29.913082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:11917 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.481 [2024-07-15 10:10:29.913105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:16.481 [2024-07-15 10:10:29.922903] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fe63e0) 00:29:16.481 [2024-07-15 10:10:29.922936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:15716 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.481 [2024-07-15 10:10:29.922944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:16.481 [2024-07-15 10:10:29.935962] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fe63e0) 00:29:16.481 [2024-07-15 10:10:29.936006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:8579 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.481 [2024-07-15 10:10:29.936016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:16.481 [2024-07-15 10:10:29.947644] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fe63e0) 00:29:16.481 [2024-07-15 10:10:29.947687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:18837 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.481 [2024-07-15 10:10:29.947696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:16.481 [2024-07-15 10:10:29.957083] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fe63e0) 00:29:16.482 [2024-07-15 10:10:29.957119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:16533 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.482 [2024-07-15 10:10:29.957126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:16.482 [2024-07-15 10:10:29.968750] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fe63e0) 00:29:16.482 [2024-07-15 10:10:29.968788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:3001 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.482 [2024-07-15 10:10:29.968797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:16.482 [2024-07-15 10:10:29.979885] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fe63e0) 00:29:16.482 [2024-07-15 10:10:29.979920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:5973 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.482 [2024-07-15 10:10:29.979927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:16.482 [2024-07-15 10:10:29.990655] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fe63e0) 00:29:16.482 [2024-07-15 10:10:29.990698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:18691 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.482 [2024-07-15 10:10:29.990706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:16.482 [2024-07-15 10:10:30.003314] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fe63e0) 00:29:16.482 [2024-07-15 10:10:30.003356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:15477 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.482 [2024-07-15 10:10:30.003367] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
[... roughly a hundred more entries, timestamps 10:10:30.012 through 10:10:31.231, repeat the same three-line pattern: nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done reports a data digest error on tqpair=(0x1fe63e0), nvme_qpair.c: 243:nvme_io_qpair_print_command prints the affected READ (qid:1, len:1, varying cid and lba), and nvme_qpair.c: 474:spdk_nvme_print_completion reports COMMAND TRANSIENT TRANSPORT ERROR (00/22) ...]
00:29:17.785
00:29:17.785 Latency(us)
00:29:17.785 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:17.785 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:29:17.785 nvme0n1 : 2.00 23110.85 90.28 0.00 0.00 5532.05 2761.67 16598.64
00:29:17.785 ===================================================================================================================
00:29:17.785 Total : 23110.85 90.28 0.00 0.00 5532.05 2761.67 16598.64
00:29:17.785 0
00:29:17.785 10:10:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:29:17.785 10:10:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:29:17.785 10:10:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:29:17.785 10:10:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:29:17.785 | .driver_specific
00:29:17.785 | .nvme_error
00:29:17.785 | .status_code
00:29:17.785 | .command_transient_transport_error'
00:29:18.045 10:10:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 181 > 0 ))
00:29:18.045 10:10:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 93465
00:29:18.045 10:10:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 93465 ']'
00:29:18.045 10:10:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 93465
00:29:18.045 10:10:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname
00:29:18.045 10:10:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:29:18.045 10:10:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 93465
00:29:18.045 10:10:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:29:18.045 10:10:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:29:18.045 killing process with pid 93465
10:10:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 93465'
10:10:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 93465
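For reference, the get_transient_errcount step traced just above boils down to the following shell sketch (the socket path, script paths and bdev name are the ones from this run; the errcount variable is illustrative, not the test script's own name):

# Ask the bdevperf app for per-bdev I/O statistics and pull out the NVMe
# transient transport error counter that --nvme-error-stat enables.
errcount=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
    bdev_get_iostat -b nvme0n1 \
  | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
# The stage only passes if at least one such error was counted; this run saw 181.
(( errcount > 0 ))

Because the test attaches with --nvme-error-stat and --bdev-retry-count -1 (see the trace of the next run below), each READ that hits a corrupted digest is retried inside the bdev layer rather than failed up the stack, which is why the summary above still shows healthy IOPS while the transient-error counter climbs.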
Received shutdown signal, test time was about 2.000000 seconds
00:29:18.045
00:29:18.045 Latency(us)
00:29:18.045 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:18.045 ===================================================================================================================
00:29:18.045 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:29:18.045 10:10:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 93465
00:29:18.304 10:10:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16
00:29:18.304 10:10:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:29:18.304 10:10:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread
00:29:18.304 10:10:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:29:18.304 10:10:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:29:18.304 10:10:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=93550
00:29:18.304 10:10:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 93550 /var/tmp/bperf.sock
00:29:18.304 10:10:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
00:29:18.304 10:10:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 93550 ']'
00:29:18.304 10:10:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock
00:29:18.304 10:10:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100
00:29:18.304 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:29:18.304 10:10:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:29:18.304 10:10:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable
00:29:18.304 10:10:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:29:18.304 I/O size of 131072 is greater than zero copy threshold (65536).
00:29:18.304 Zero copy mechanism will not be used.
00:29:18.304 [2024-07-15 10:10:31.780951] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization...
00:29:18.304 [2024-07-15 10:10:31.781029] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93550 ]
00:29:18.563 [2024-07-15 10:10:31.917912] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:29:18.563 [2024-07-15 10:10:32.026368] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:29:19.132 10:10:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:29:19.132 10:10:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0
00:29:19.132 10:10:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:29:19.132 10:10:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:29:19.392 10:10:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:29:19.392 10:10:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:29:19.392 10:10:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:29:19.392 10:10:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:29:19.392 10:10:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:29:19.392 10:10:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:29:19.651 nvme0n1
00:29:19.912 10:10:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:29:19.912 10:10:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:29:19.912 10:10:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:29:19.912 10:10:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:29:19.912 10:10:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:29:19.912 10:10:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:29:19.912 I/O size of 131072 is greater than zero copy threshold (65536).
00:29:19.912 Zero copy mechanism will not be used.
00:29:19.912 Running I/O for 2 seconds...
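Condensed, the xtrace above amounts to the following RPC sequence (a sketch: the bperf.sock paths, target address and NQN are the values from this run; rpc_cmd is assumed to address the nvmf target application's default RPC socket, which this excerpt does not show):

BPERF_RPC='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock'
TARGET_RPC='/home/vagrant/spdk_repo/spdk/scripts/rpc.py'   # default socket; an assumption here
# Initiator (bdevperf): keep per-status-code NVMe error counters and retry failed I/O forever.
$BPERF_RPC bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
# Keep crc32c error injection disabled while the controller attaches.
$TARGET_RPC accel_error_inject_error -o crc32c -t disable
# Initiator: attach the subsystem over TCP with data digest (--ddgst) enabled.
$BPERF_RPC bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -b nvme0
# Now corrupt the crc32c result at an interval of 32 operations, so data digests
# periodically fail to verify and show up as the errors logged below.
$TARGET_RPC accel_error_inject_error -o crc32c -t corrupt -i 32
# Drive the 2-second randread workload (131072-byte I/O, queue depth 16) set up
# on the bdevperf command line.
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests

With --ddgst enabled and the injection interval at 32, a steady fraction of READ completions comes back as COMMAND TRANSIENT TRANSPORT ERROR (00/22), which is exactly the pattern the log records next.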
00:29:19.912 [2024-07-15 10:10:33.366641] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:19.912 [2024-07-15 10:10:33.366714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.912 [2024-07-15 10:10:33.366725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:19.912 [2024-07-15 10:10:33.370635] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:19.912 [2024-07-15 10:10:33.370688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.912 [2024-07-15 10:10:33.370698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:19.912 [2024-07-15 10:10:33.374624] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:19.912 [2024-07-15 10:10:33.374672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.912 [2024-07-15 10:10:33.374681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:19.912 [2024-07-15 10:10:33.378271] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:19.912 [2024-07-15 10:10:33.378305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.912 [2024-07-15 10:10:33.378313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:19.912 [2024-07-15 10:10:33.380345] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:19.912 [2024-07-15 10:10:33.380388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.912 [2024-07-15 10:10:33.380396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:19.912 [2024-07-15 10:10:33.384534] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:19.912 [2024-07-15 10:10:33.384575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.912 [2024-07-15 10:10:33.384585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:19.912 [2024-07-15 10:10:33.388528] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:19.912 [2024-07-15 10:10:33.388568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.912 [2024-07-15 10:10:33.388578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:19.912 [2024-07-15 10:10:33.392093] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:19.912 [2024-07-15 10:10:33.392129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.912 [2024-07-15 10:10:33.392137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:19.912 [2024-07-15 10:10:33.394988] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:19.912 [2024-07-15 10:10:33.395023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.912 [2024-07-15 10:10:33.395032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:19.912 [2024-07-15 10:10:33.399005] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:19.912 [2024-07-15 10:10:33.399047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.912 [2024-07-15 10:10:33.399057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:19.912 [2024-07-15 10:10:33.403073] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:19.912 [2024-07-15 10:10:33.403109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.912 [2024-07-15 10:10:33.403117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:19.912 [2024-07-15 10:10:33.405341] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:19.912 [2024-07-15 10:10:33.405376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.912 [2024-07-15 10:10:33.405384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:19.912 [2024-07-15 10:10:33.409176] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:19.912 [2024-07-15 10:10:33.409210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.912 [2024-07-15 10:10:33.409218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:19.912 [2024-07-15 10:10:33.411675] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:19.912 [2024-07-15 10:10:33.411723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.912 [2024-07-15 10:10:33.411733] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:19.913 [2024-07-15 10:10:33.415580] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:19.913 [2024-07-15 10:10:33.415622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.913 [2024-07-15 10:10:33.415632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:19.913 [2024-07-15 10:10:33.419339] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:19.913 [2024-07-15 10:10:33.419376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.913 [2024-07-15 10:10:33.419383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:19.913 [2024-07-15 10:10:33.423195] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:19.913 [2024-07-15 10:10:33.423229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.913 [2024-07-15 10:10:33.423237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:19.913 [2024-07-15 10:10:33.425934] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:19.913 [2024-07-15 10:10:33.425975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.913 [2024-07-15 10:10:33.425983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:19.913 [2024-07-15 10:10:33.429200] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:19.913 [2024-07-15 10:10:33.429238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.913 [2024-07-15 10:10:33.429250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:19.913 [2024-07-15 10:10:33.432527] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:19.913 [2024-07-15 10:10:33.432558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.913 [2024-07-15 10:10:33.432567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:19.913 [2024-07-15 10:10:33.435950] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:19.913 [2024-07-15 10:10:33.435981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.913 [2024-07-15 
10:10:33.435988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:19.913 [2024-07-15 10:10:33.439262] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:19.913 [2024-07-15 10:10:33.439294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.913 [2024-07-15 10:10:33.439301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:19.913 [2024-07-15 10:10:33.442724] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:19.913 [2024-07-15 10:10:33.442757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.913 [2024-07-15 10:10:33.442763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:19.913 [2024-07-15 10:10:33.445892] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:19.913 [2024-07-15 10:10:33.445926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.913 [2024-07-15 10:10:33.445933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:19.913 [2024-07-15 10:10:33.448506] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:19.913 [2024-07-15 10:10:33.448536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.913 [2024-07-15 10:10:33.448543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:19.913 [2024-07-15 10:10:33.451954] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:19.913 [2024-07-15 10:10:33.451987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.913 [2024-07-15 10:10:33.451994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:19.913 [2024-07-15 10:10:33.455795] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:19.913 [2024-07-15 10:10:33.455826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.913 [2024-07-15 10:10:33.455833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:19.913 [2024-07-15 10:10:33.459571] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:19.913 [2024-07-15 10:10:33.459603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:29:19.913 [2024-07-15 10:10:33.459611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:19.913 [2024-07-15 10:10:33.462269] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:19.913 [2024-07-15 10:10:33.462301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.913 [2024-07-15 10:10:33.462309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:19.913 [2024-07-15 10:10:33.465750] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:19.913 [2024-07-15 10:10:33.465798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.913 [2024-07-15 10:10:33.465809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:19.913 [2024-07-15 10:10:33.470152] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:19.913 [2024-07-15 10:10:33.470193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.913 [2024-07-15 10:10:33.470203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:19.913 [2024-07-15 10:10:33.473989] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:19.913 [2024-07-15 10:10:33.474023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.913 [2024-07-15 10:10:33.474047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:19.913 [2024-07-15 10:10:33.477886] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:19.913 [2024-07-15 10:10:33.477920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.913 [2024-07-15 10:10:33.477928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:19.913 [2024-07-15 10:10:33.481516] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:19.913 [2024-07-15 10:10:33.481552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.913 [2024-07-15 10:10:33.481559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:19.913 [2024-07-15 10:10:33.485239] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:19.913 [2024-07-15 10:10:33.485275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 
lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.913 [2024-07-15 10:10:33.485299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:19.913 [2024-07-15 10:10:33.489122] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:19.913 [2024-07-15 10:10:33.489156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.913 [2024-07-15 10:10:33.489164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:19.913 [2024-07-15 10:10:33.492946] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:19.913 [2024-07-15 10:10:33.492982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.913 [2024-07-15 10:10:33.492991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:20.177 [2024-07-15 10:10:33.496853] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.177 [2024-07-15 10:10:33.496888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.177 [2024-07-15 10:10:33.496897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:20.177 [2024-07-15 10:10:33.500486] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.177 [2024-07-15 10:10:33.500520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.177 [2024-07-15 10:10:33.500528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:20.177 [2024-07-15 10:10:33.503751] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.177 [2024-07-15 10:10:33.503780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.177 [2024-07-15 10:10:33.503788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:20.177 [2024-07-15 10:10:33.507523] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.177 [2024-07-15 10:10:33.507558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.177 [2024-07-15 10:10:33.507565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:20.177 [2024-07-15 10:10:33.510966] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.177 [2024-07-15 10:10:33.511006] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.177 [2024-07-15 10:10:33.511016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:20.177 [2024-07-15 10:10:33.514752] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.177 [2024-07-15 10:10:33.514786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.177 [2024-07-15 10:10:33.514794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:20.177 [2024-07-15 10:10:33.517729] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.177 [2024-07-15 10:10:33.517762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.177 [2024-07-15 10:10:33.517770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:20.177 [2024-07-15 10:10:33.520693] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.177 [2024-07-15 10:10:33.520720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.177 [2024-07-15 10:10:33.520728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:20.177 [2024-07-15 10:10:33.523779] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.177 [2024-07-15 10:10:33.523809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.177 [2024-07-15 10:10:33.523817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:20.177 [2024-07-15 10:10:33.527250] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.177 [2024-07-15 10:10:33.527282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.177 [2024-07-15 10:10:33.527290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:20.177 [2024-07-15 10:10:33.530461] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.177 [2024-07-15 10:10:33.530505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.177 [2024-07-15 10:10:33.530516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:20.177 [2024-07-15 10:10:33.534468] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.177 
[2024-07-15 10:10:33.534516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.177 [2024-07-15 10:10:33.534527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:20.177 [2024-07-15 10:10:33.538246] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.177 [2024-07-15 10:10:33.538287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.177 [2024-07-15 10:10:33.538297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:20.177 [2024-07-15 10:10:33.540696] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.177 [2024-07-15 10:10:33.540725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.177 [2024-07-15 10:10:33.540732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:20.177 [2024-07-15 10:10:33.544304] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.177 [2024-07-15 10:10:33.544342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.177 [2024-07-15 10:10:33.544351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:20.177 [2024-07-15 10:10:33.547630] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.177 [2024-07-15 10:10:33.547669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.177 [2024-07-15 10:10:33.547678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:20.177 [2024-07-15 10:10:33.550691] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.177 [2024-07-15 10:10:33.550722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.177 [2024-07-15 10:10:33.550730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:20.177 [2024-07-15 10:10:33.553955] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.178 [2024-07-15 10:10:33.553989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.178 [2024-07-15 10:10:33.553997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:20.178 [2024-07-15 10:10:33.556782] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x1fcb380) 00:29:20.178 [2024-07-15 10:10:33.556814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.178 [2024-07-15 10:10:33.556822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:20.178 [2024-07-15 10:10:33.559453] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.178 [2024-07-15 10:10:33.559485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.178 [2024-07-15 10:10:33.559492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:20.178 [2024-07-15 10:10:33.562515] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.178 [2024-07-15 10:10:33.562548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.178 [2024-07-15 10:10:33.562555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:20.178 [2024-07-15 10:10:33.565626] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.178 [2024-07-15 10:10:33.565686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.178 [2024-07-15 10:10:33.565695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:20.178 [2024-07-15 10:10:33.568402] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.178 [2024-07-15 10:10:33.568431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.178 [2024-07-15 10:10:33.568438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:20.178 [2024-07-15 10:10:33.571412] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.178 [2024-07-15 10:10:33.571443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.178 [2024-07-15 10:10:33.571451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:20.178 [2024-07-15 10:10:33.574141] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.178 [2024-07-15 10:10:33.574172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.178 [2024-07-15 10:10:33.574180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:20.178 [2024-07-15 10:10:33.576487] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.178 [2024-07-15 10:10:33.576517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.178 [2024-07-15 10:10:33.576525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:20.178 [2024-07-15 10:10:33.580171] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.178 [2024-07-15 10:10:33.580208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.178 [2024-07-15 10:10:33.580234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:20.178 [2024-07-15 10:10:33.583793] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.178 [2024-07-15 10:10:33.583826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.178 [2024-07-15 10:10:33.583834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:20.178 [2024-07-15 10:10:33.586358] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.178 [2024-07-15 10:10:33.586390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.178 [2024-07-15 10:10:33.586398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:20.178 [2024-07-15 10:10:33.589950] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.178 [2024-07-15 10:10:33.589982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.178 [2024-07-15 10:10:33.589989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:20.178 [2024-07-15 10:10:33.593842] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.178 [2024-07-15 10:10:33.593878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.178 [2024-07-15 10:10:33.593886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:20.178 [2024-07-15 10:10:33.597608] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.178 [2024-07-15 10:10:33.597668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.178 [2024-07-15 10:10:33.597678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
00:29:20.178 [2024-07-15 10:10:33.600757] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.178 [2024-07-15 10:10:33.600791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.178 [2024-07-15 10:10:33.600816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:20.178 [2024-07-15 10:10:33.604361] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.178 [2024-07-15 10:10:33.604420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.178 [2024-07-15 10:10:33.604428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:20.178 [2024-07-15 10:10:33.608187] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.178 [2024-07-15 10:10:33.608219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.178 [2024-07-15 10:10:33.608226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:20.178 [2024-07-15 10:10:33.611843] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.178 [2024-07-15 10:10:33.611874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.178 [2024-07-15 10:10:33.611883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:20.178 [2024-07-15 10:10:33.614180] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.178 [2024-07-15 10:10:33.614213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.178 [2024-07-15 10:10:33.614221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:20.178 [2024-07-15 10:10:33.617571] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.178 [2024-07-15 10:10:33.617605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.178 [2024-07-15 10:10:33.617613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:20.178 [2024-07-15 10:10:33.621281] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.178 [2024-07-15 10:10:33.621316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.178 [2024-07-15 10:10:33.621340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:20.178 [2024-07-15 10:10:33.625644] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.178 [2024-07-15 10:10:33.625690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.178 [2024-07-15 10:10:33.625700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:20.178 [2024-07-15 10:10:33.629531] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.178 [2024-07-15 10:10:33.629566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.178 [2024-07-15 10:10:33.629589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:20.178 [2024-07-15 10:10:33.631727] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.178 [2024-07-15 10:10:33.631757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.178 [2024-07-15 10:10:33.631764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:20.178 [2024-07-15 10:10:33.635460] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.178 [2024-07-15 10:10:33.635494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.178 [2024-07-15 10:10:33.635502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:20.178 [2024-07-15 10:10:33.639079] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.178 [2024-07-15 10:10:33.639115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.178 [2024-07-15 10:10:33.639139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:20.178 [2024-07-15 10:10:33.641938] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.178 [2024-07-15 10:10:33.641968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.178 [2024-07-15 10:10:33.641976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:20.178 [2024-07-15 10:10:33.645332] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.178 [2024-07-15 10:10:33.645387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.178 [2024-07-15 10:10:33.645398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:20.178 [2024-07-15 10:10:33.649859] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.178 [2024-07-15 10:10:33.649892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.178 [2024-07-15 10:10:33.649900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:20.178 [2024-07-15 10:10:33.653646] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.178 [2024-07-15 10:10:33.653687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.178 [2024-07-15 10:10:33.653695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:20.178 [2024-07-15 10:10:33.657354] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.178 [2024-07-15 10:10:33.657387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.178 [2024-07-15 10:10:33.657396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:20.178 [2024-07-15 10:10:33.659598] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.178 [2024-07-15 10:10:33.659637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.178 [2024-07-15 10:10:33.659662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:20.178 [2024-07-15 10:10:33.664485] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.178 [2024-07-15 10:10:33.664528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.178 [2024-07-15 10:10:33.664539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:20.178 [2024-07-15 10:10:33.668750] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.178 [2024-07-15 10:10:33.668785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.178 [2024-07-15 10:10:33.668793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:20.178 [2024-07-15 10:10:33.672629] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.178 [2024-07-15 10:10:33.672674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.178 [2024-07-15 10:10:33.672683] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:20.178 [2024-07-15 10:10:33.675416] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.178 [2024-07-15 10:10:33.675451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.178 [2024-07-15 10:10:33.675460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:20.178 [2024-07-15 10:10:33.678634] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.178 [2024-07-15 10:10:33.678678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.178 [2024-07-15 10:10:33.678686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:20.178 [2024-07-15 10:10:33.682071] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.178 [2024-07-15 10:10:33.682104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.178 [2024-07-15 10:10:33.682112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:20.178 [2024-07-15 10:10:33.684842] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.178 [2024-07-15 10:10:33.684875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.178 [2024-07-15 10:10:33.684899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:20.178 [2024-07-15 10:10:33.687967] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.178 [2024-07-15 10:10:33.688000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.178 [2024-07-15 10:10:33.688008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:20.178 [2024-07-15 10:10:33.690910] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.178 [2024-07-15 10:10:33.690941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.178 [2024-07-15 10:10:33.690949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:20.178 [2024-07-15 10:10:33.694205] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.178 [2024-07-15 10:10:33.694248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:20.178 [2024-07-15 10:10:33.694259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:20.178 [2024-07-15 10:10:33.697572] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.178 [2024-07-15 10:10:33.697619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.178 [2024-07-15 10:10:33.697627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:20.178 [2024-07-15 10:10:33.701274] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.178 [2024-07-15 10:10:33.701308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.178 [2024-07-15 10:10:33.701331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:20.178 [2024-07-15 10:10:33.704807] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.178 [2024-07-15 10:10:33.704839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.178 [2024-07-15 10:10:33.704846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:20.178 [2024-07-15 10:10:33.708448] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.178 [2024-07-15 10:10:33.708481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.178 [2024-07-15 10:10:33.708489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:20.178 [2024-07-15 10:10:33.711901] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.178 [2024-07-15 10:10:33.711935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.178 [2024-07-15 10:10:33.711943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:20.178 [2024-07-15 10:10:33.715491] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.178 [2024-07-15 10:10:33.715525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.178 [2024-07-15 10:10:33.715532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:20.178 [2024-07-15 10:10:33.718925] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.178 [2024-07-15 10:10:33.718956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5344 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.178 [2024-07-15 10:10:33.718964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:20.178 [2024-07-15 10:10:33.721199] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.178 [2024-07-15 10:10:33.721233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.178 [2024-07-15 10:10:33.721241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:20.178 [2024-07-15 10:10:33.724898] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.178 [2024-07-15 10:10:33.724931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.178 [2024-07-15 10:10:33.724939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:20.178 [2024-07-15 10:10:33.728332] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.178 [2024-07-15 10:10:33.728373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.178 [2024-07-15 10:10:33.728380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:20.178 [2024-07-15 10:10:33.732477] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.178 [2024-07-15 10:10:33.732515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.178 [2024-07-15 10:10:33.732525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:20.178 [2024-07-15 10:10:33.736481] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.178 [2024-07-15 10:10:33.736516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.179 [2024-07-15 10:10:33.736540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:20.179 [2024-07-15 10:10:33.741161] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.179 [2024-07-15 10:10:33.741205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.179 [2024-07-15 10:10:33.741214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:20.179 [2024-07-15 10:10:33.745194] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.179 [2024-07-15 10:10:33.745231] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.179 [2024-07-15 10:10:33.745238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:20.179 [2024-07-15 10:10:33.749033] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.179 [2024-07-15 10:10:33.749070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.179 [2024-07-15 10:10:33.749078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:20.179 [2024-07-15 10:10:33.752233] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.179 [2024-07-15 10:10:33.752266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.179 [2024-07-15 10:10:33.752274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:20.179 [2024-07-15 10:10:33.755127] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.179 [2024-07-15 10:10:33.755160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.179 [2024-07-15 10:10:33.755168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:20.448 [2024-07-15 10:10:33.759175] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.448 [2024-07-15 10:10:33.759221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.448 [2024-07-15 10:10:33.759232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:20.448 [2024-07-15 10:10:33.763448] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.448 [2024-07-15 10:10:33.763490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.448 [2024-07-15 10:10:33.763499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:20.448 [2024-07-15 10:10:33.766491] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.448 [2024-07-15 10:10:33.766527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.448 [2024-07-15 10:10:33.766551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:20.448 [2024-07-15 10:10:33.770036] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.448 [2024-07-15 10:10:33.770072] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.448 [2024-07-15 10:10:33.770080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:20.448 [2024-07-15 10:10:33.774052] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.448 [2024-07-15 10:10:33.774096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.448 [2024-07-15 10:10:33.774111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:20.448 [2024-07-15 10:10:33.777719] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.448 [2024-07-15 10:10:33.777756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.448 [2024-07-15 10:10:33.777765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:20.448 [2024-07-15 10:10:33.780826] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.448 [2024-07-15 10:10:33.780874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.448 [2024-07-15 10:10:33.780885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:20.448 [2024-07-15 10:10:33.785367] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.448 [2024-07-15 10:10:33.785416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.448 [2024-07-15 10:10:33.785426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:20.448 [2024-07-15 10:10:33.789892] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.448 [2024-07-15 10:10:33.789933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.448 [2024-07-15 10:10:33.789943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:20.448 [2024-07-15 10:10:33.793804] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.448 [2024-07-15 10:10:33.793842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.448 [2024-07-15 10:10:33.793850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:20.448 [2024-07-15 10:10:33.797298] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1fcb380) 00:29:20.448 [2024-07-15 10:10:33.797333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.448 [2024-07-15 10:10:33.797342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:20.448 [2024-07-15 10:10:33.801169] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.448 [2024-07-15 10:10:33.801205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.448 [2024-07-15 10:10:33.801213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:20.448 [2024-07-15 10:10:33.804858] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.448 [2024-07-15 10:10:33.804892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.448 [2024-07-15 10:10:33.804901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:20.448 [2024-07-15 10:10:33.808317] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.448 [2024-07-15 10:10:33.808350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.448 [2024-07-15 10:10:33.808359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:20.448 [2024-07-15 10:10:33.810582] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.448 [2024-07-15 10:10:33.810613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.448 [2024-07-15 10:10:33.810620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:20.448 [2024-07-15 10:10:33.814094] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.449 [2024-07-15 10:10:33.814127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.449 [2024-07-15 10:10:33.814136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:20.449 [2024-07-15 10:10:33.817695] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.449 [2024-07-15 10:10:33.817726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.449 [2024-07-15 10:10:33.817733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:20.449 [2024-07-15 10:10:33.820346] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.449 [2024-07-15 10:10:33.820388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.449 [2024-07-15 10:10:33.820413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:20.449 [2024-07-15 10:10:33.823824] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.449 [2024-07-15 10:10:33.823857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.449 [2024-07-15 10:10:33.823866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:20.449 [2024-07-15 10:10:33.826796] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.449 [2024-07-15 10:10:33.826830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.449 [2024-07-15 10:10:33.826838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:20.449 [2024-07-15 10:10:33.830059] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.449 [2024-07-15 10:10:33.830090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.449 [2024-07-15 10:10:33.830098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:20.449 [2024-07-15 10:10:33.832436] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.449 [2024-07-15 10:10:33.832468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.449 [2024-07-15 10:10:33.832476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:20.449 [2024-07-15 10:10:33.836022] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.449 [2024-07-15 10:10:33.836054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.449 [2024-07-15 10:10:33.836062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:20.449 [2024-07-15 10:10:33.839206] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.449 [2024-07-15 10:10:33.839238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.449 [2024-07-15 10:10:33.839245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 
dnr:0 00:29:20.449 [2024-07-15 10:10:33.843114] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.449 [2024-07-15 10:10:33.843148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.449 [2024-07-15 10:10:33.843156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:20.449 [2024-07-15 10:10:33.846610] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.449 [2024-07-15 10:10:33.846642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.449 [2024-07-15 10:10:33.846650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:20.449 [2024-07-15 10:10:33.848763] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.449 [2024-07-15 10:10:33.848796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.449 [2024-07-15 10:10:33.848804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:20.449 [2024-07-15 10:10:33.852007] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.449 [2024-07-15 10:10:33.852040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.449 [2024-07-15 10:10:33.852047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:20.449 [2024-07-15 10:10:33.855751] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.449 [2024-07-15 10:10:33.855783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.449 [2024-07-15 10:10:33.855791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:20.449 [2024-07-15 10:10:33.859232] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.449 [2024-07-15 10:10:33.859264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.449 [2024-07-15 10:10:33.859272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:20.449 [2024-07-15 10:10:33.861614] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.449 [2024-07-15 10:10:33.861647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.449 [2024-07-15 10:10:33.861655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:20.449 [2024-07-15 10:10:33.865398] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.449 [2024-07-15 10:10:33.865433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.449 [2024-07-15 10:10:33.865440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:20.449 [2024-07-15 10:10:33.869176] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.449 [2024-07-15 10:10:33.869210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.449 [2024-07-15 10:10:33.869217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:20.449 [2024-07-15 10:10:33.872899] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.449 [2024-07-15 10:10:33.872934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.449 [2024-07-15 10:10:33.872941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:20.449 [2024-07-15 10:10:33.875320] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.449 [2024-07-15 10:10:33.875350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.449 [2024-07-15 10:10:33.875357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:20.449 [2024-07-15 10:10:33.878510] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.449 [2024-07-15 10:10:33.878544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.449 [2024-07-15 10:10:33.878553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:20.449 [2024-07-15 10:10:33.882066] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.449 [2024-07-15 10:10:33.882098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.449 [2024-07-15 10:10:33.882106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:20.449 [2024-07-15 10:10:33.884570] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.449 [2024-07-15 10:10:33.884602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.449 [2024-07-15 10:10:33.884610] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:20.449 [2024-07-15 10:10:33.887396] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.449 [2024-07-15 10:10:33.887428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.449 [2024-07-15 10:10:33.887435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:20.449 [2024-07-15 10:10:33.891121] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.449 [2024-07-15 10:10:33.891154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.449 [2024-07-15 10:10:33.891162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:20.449 [2024-07-15 10:10:33.893549] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.449 [2024-07-15 10:10:33.893580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.449 [2024-07-15 10:10:33.893588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:20.449 [2024-07-15 10:10:33.896603] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.449 [2024-07-15 10:10:33.896639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.449 [2024-07-15 10:10:33.896648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:20.449 [2024-07-15 10:10:33.900363] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.449 [2024-07-15 10:10:33.900408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.449 [2024-07-15 10:10:33.900416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:20.449 [2024-07-15 10:10:33.903410] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.450 [2024-07-15 10:10:33.903445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.450 [2024-07-15 10:10:33.903453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:20.450 [2024-07-15 10:10:33.906569] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.450 [2024-07-15 10:10:33.906607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.450 [2024-07-15 10:10:33.906615] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:20.450 [2024-07-15 10:10:33.910386] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.450 [2024-07-15 10:10:33.910422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.450 [2024-07-15 10:10:33.910435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:20.450 [2024-07-15 10:10:33.913372] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.450 [2024-07-15 10:10:33.913407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.450 [2024-07-15 10:10:33.913416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:20.450 [2024-07-15 10:10:33.916427] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.450 [2024-07-15 10:10:33.916462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.450 [2024-07-15 10:10:33.916470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:20.450 [2024-07-15 10:10:33.919347] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.450 [2024-07-15 10:10:33.919382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.450 [2024-07-15 10:10:33.919390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:20.450 [2024-07-15 10:10:33.922691] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.450 [2024-07-15 10:10:33.922726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.450 [2024-07-15 10:10:33.922734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:20.450 [2024-07-15 10:10:33.925248] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.450 [2024-07-15 10:10:33.925283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.450 [2024-07-15 10:10:33.925291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:20.450 [2024-07-15 10:10:33.928653] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.450 [2024-07-15 10:10:33.928691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:29:20.450 [2024-07-15 10:10:33.928699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:20.450 [2024-07-15 10:10:33.931746] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.450 [2024-07-15 10:10:33.931783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.450 [2024-07-15 10:10:33.931791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:20.450 [2024-07-15 10:10:33.935136] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.450 [2024-07-15 10:10:33.935174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.450 [2024-07-15 10:10:33.935182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:20.450 [2024-07-15 10:10:33.937956] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.450 [2024-07-15 10:10:33.937993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.450 [2024-07-15 10:10:33.938001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:20.450 [2024-07-15 10:10:33.940992] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.450 [2024-07-15 10:10:33.941026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.450 [2024-07-15 10:10:33.941034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:20.450 [2024-07-15 10:10:33.944337] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.450 [2024-07-15 10:10:33.944380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.450 [2024-07-15 10:10:33.944388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:20.450 [2024-07-15 10:10:33.947177] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.450 [2024-07-15 10:10:33.947212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.450 [2024-07-15 10:10:33.947221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:20.450 [2024-07-15 10:10:33.950973] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.450 [2024-07-15 10:10:33.951021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 
lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.450 [2024-07-15 10:10:33.951029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:20.450 [2024-07-15 10:10:33.955164] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.450 [2024-07-15 10:10:33.955203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.450 [2024-07-15 10:10:33.955212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:20.450 [2024-07-15 10:10:33.959151] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.450 [2024-07-15 10:10:33.959190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.450 [2024-07-15 10:10:33.959199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:20.450 [2024-07-15 10:10:33.961622] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.450 [2024-07-15 10:10:33.961670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.450 [2024-07-15 10:10:33.961680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:20.450 [2024-07-15 10:10:33.965984] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.450 [2024-07-15 10:10:33.966021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.450 [2024-07-15 10:10:33.966029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:20.450 [2024-07-15 10:10:33.970156] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.450 [2024-07-15 10:10:33.970194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.450 [2024-07-15 10:10:33.970203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:20.450 [2024-07-15 10:10:33.972430] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.450 [2024-07-15 10:10:33.972464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.450 [2024-07-15 10:10:33.972472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:20.450 [2024-07-15 10:10:33.976711] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.450 [2024-07-15 10:10:33.976745] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.450 [2024-07-15 10:10:33.976753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:20.450 [2024-07-15 10:10:33.980686] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.450 [2024-07-15 10:10:33.980718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.450 [2024-07-15 10:10:33.980726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:20.450 [2024-07-15 10:10:33.983476] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.450 [2024-07-15 10:10:33.983509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.450 [2024-07-15 10:10:33.983517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:20.450 [2024-07-15 10:10:33.986838] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.450 [2024-07-15 10:10:33.986873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.450 [2024-07-15 10:10:33.986881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:20.450 [2024-07-15 10:10:33.990770] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.450 [2024-07-15 10:10:33.990805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.450 [2024-07-15 10:10:33.990814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:20.450 [2024-07-15 10:10:33.993417] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.450 [2024-07-15 10:10:33.993451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.451 [2024-07-15 10:10:33.993460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:20.451 [2024-07-15 10:10:33.996573] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.451 [2024-07-15 10:10:33.996606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.451 [2024-07-15 10:10:33.996614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:20.451 [2024-07-15 10:10:33.999929] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 
00:29:20.451 [2024-07-15 10:10:33.999962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.451 [2024-07-15 10:10:33.999969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:20.451 [2024-07-15 10:10:34.003690] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.451 [2024-07-15 10:10:34.003722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.451 [2024-07-15 10:10:34.003730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:20.451 [2024-07-15 10:10:34.007091] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.451 [2024-07-15 10:10:34.007126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.451 [2024-07-15 10:10:34.007133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:20.451 [2024-07-15 10:10:34.009990] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.451 [2024-07-15 10:10:34.010023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.451 [2024-07-15 10:10:34.010031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:20.451 [2024-07-15 10:10:34.012764] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.451 [2024-07-15 10:10:34.012796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.451 [2024-07-15 10:10:34.012803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:20.451 [2024-07-15 10:10:34.015833] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.451 [2024-07-15 10:10:34.015867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.451 [2024-07-15 10:10:34.015875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:20.451 [2024-07-15 10:10:34.018936] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.451 [2024-07-15 10:10:34.018970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.451 [2024-07-15 10:10:34.018978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:20.451 [2024-07-15 10:10:34.021798] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.451 [2024-07-15 10:10:34.021833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.451 [2024-07-15 10:10:34.021841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:20.451 [2024-07-15 10:10:34.024834] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.451 [2024-07-15 10:10:34.024868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.451 [2024-07-15 10:10:34.024876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:20.451 [2024-07-15 10:10:34.027677] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.451 [2024-07-15 10:10:34.027725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.451 [2024-07-15 10:10:34.027734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:20.713 [2024-07-15 10:10:34.031391] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.713 [2024-07-15 10:10:34.031429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.713 [2024-07-15 10:10:34.031437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:20.713 [2024-07-15 10:10:34.033716] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.713 [2024-07-15 10:10:34.033748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.713 [2024-07-15 10:10:34.033755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:20.713 [2024-07-15 10:10:34.037233] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.713 [2024-07-15 10:10:34.037271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.713 [2024-07-15 10:10:34.037280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:20.713 [2024-07-15 10:10:34.041294] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.713 [2024-07-15 10:10:34.041331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.713 [2024-07-15 10:10:34.041339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:20.713 [2024-07-15 10:10:34.045311] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.714 [2024-07-15 10:10:34.045349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.714 [2024-07-15 10:10:34.045357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:20.714 [2024-07-15 10:10:34.049136] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.714 [2024-07-15 10:10:34.049173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.714 [2024-07-15 10:10:34.049180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:20.714 [2024-07-15 10:10:34.051481] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.714 [2024-07-15 10:10:34.051514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.714 [2024-07-15 10:10:34.051521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:20.714 [2024-07-15 10:10:34.055298] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.714 [2024-07-15 10:10:34.055346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.714 [2024-07-15 10:10:34.055354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:20.714 [2024-07-15 10:10:34.059162] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.714 [2024-07-15 10:10:34.059200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.714 [2024-07-15 10:10:34.059208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:20.714 [2024-07-15 10:10:34.062855] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.714 [2024-07-15 10:10:34.062891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.714 [2024-07-15 10:10:34.062899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:20.714 [2024-07-15 10:10:34.065480] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.714 [2024-07-15 10:10:34.065513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.714 [2024-07-15 10:10:34.065521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 
dnr:0 00:29:20.714 [2024-07-15 10:10:34.068638] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.714 [2024-07-15 10:10:34.068680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.714 [2024-07-15 10:10:34.068688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:20.714 [2024-07-15 10:10:34.072411] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.714 [2024-07-15 10:10:34.072446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.714 [2024-07-15 10:10:34.072454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:20.714 [2024-07-15 10:10:34.076125] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.714 [2024-07-15 10:10:34.076158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.714 [2024-07-15 10:10:34.076166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:20.714 [2024-07-15 10:10:34.079692] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.714 [2024-07-15 10:10:34.079726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.714 [2024-07-15 10:10:34.079733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:20.714 [2024-07-15 10:10:34.083177] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.714 [2024-07-15 10:10:34.083215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.714 [2024-07-15 10:10:34.083223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:20.714 [2024-07-15 10:10:34.085945] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.714 [2024-07-15 10:10:34.085980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.714 [2024-07-15 10:10:34.085988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:20.714 [2024-07-15 10:10:34.089455] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.714 [2024-07-15 10:10:34.089492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.714 [2024-07-15 10:10:34.089500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:20.714 [2024-07-15 10:10:34.093103] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.714 [2024-07-15 10:10:34.093142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.714 [2024-07-15 10:10:34.093150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:20.714 [2024-07-15 10:10:34.096456] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.714 [2024-07-15 10:10:34.096496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.714 [2024-07-15 10:10:34.096504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:20.714 [2024-07-15 10:10:34.099924] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.714 [2024-07-15 10:10:34.099963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.714 [2024-07-15 10:10:34.099971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:20.714 [2024-07-15 10:10:34.103579] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.714 [2024-07-15 10:10:34.103620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.714 [2024-07-15 10:10:34.103628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:20.714 [2024-07-15 10:10:34.106569] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.714 [2024-07-15 10:10:34.106607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.714 [2024-07-15 10:10:34.106615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:20.714 [2024-07-15 10:10:34.109327] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.714 [2024-07-15 10:10:34.109365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.714 [2024-07-15 10:10:34.109373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:20.714 [2024-07-15 10:10:34.113114] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.714 [2024-07-15 10:10:34.113152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.714 [2024-07-15 10:10:34.113160] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:20.714 [2024-07-15 10:10:34.115478] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.714 [2024-07-15 10:10:34.115513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.714 [2024-07-15 10:10:34.115520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:20.714 [2024-07-15 10:10:34.118973] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.714 [2024-07-15 10:10:34.119012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.714 [2024-07-15 10:10:34.119020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:20.714 [2024-07-15 10:10:34.122651] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.714 [2024-07-15 10:10:34.122697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.714 [2024-07-15 10:10:34.122704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:20.714 [2024-07-15 10:10:34.126195] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.714 [2024-07-15 10:10:34.126234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.714 [2024-07-15 10:10:34.126243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:20.714 [2024-07-15 10:10:34.128450] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.714 [2024-07-15 10:10:34.128481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.714 [2024-07-15 10:10:34.128489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:20.714 [2024-07-15 10:10:34.132235] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.714 [2024-07-15 10:10:34.132271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.714 [2024-07-15 10:10:34.132278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:20.714 [2024-07-15 10:10:34.135310] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.714 [2024-07-15 10:10:34.135345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:20.715 [2024-07-15 10:10:34.135354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:20.715 [2024-07-15 10:10:34.138476] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.715 [2024-07-15 10:10:34.138512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.715 [2024-07-15 10:10:34.138521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:20.715 [2024-07-15 10:10:34.142121] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.715 [2024-07-15 10:10:34.142156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.715 [2024-07-15 10:10:34.142164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:20.715 [2024-07-15 10:10:34.144420] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.715 [2024-07-15 10:10:34.144457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.715 [2024-07-15 10:10:34.144465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:20.715 [2024-07-15 10:10:34.147632] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.715 [2024-07-15 10:10:34.147678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.715 [2024-07-15 10:10:34.147686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:20.715 [2024-07-15 10:10:34.151246] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.715 [2024-07-15 10:10:34.151281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.715 [2024-07-15 10:10:34.151288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:20.715 [2024-07-15 10:10:34.153942] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.715 [2024-07-15 10:10:34.153974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.715 [2024-07-15 10:10:34.153983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:20.715 [2024-07-15 10:10:34.157346] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.715 [2024-07-15 10:10:34.157385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 
lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.715 [2024-07-15 10:10:34.157394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:20.715 [2024-07-15 10:10:34.160075] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.715 [2024-07-15 10:10:34.160108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.715 [2024-07-15 10:10:34.160116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:20.715 [2024-07-15 10:10:34.163005] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.715 [2024-07-15 10:10:34.163039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.715 [2024-07-15 10:10:34.163047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:20.715 [2024-07-15 10:10:34.166249] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.715 [2024-07-15 10:10:34.166284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.715 [2024-07-15 10:10:34.166292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:20.715 [2024-07-15 10:10:34.169618] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.715 [2024-07-15 10:10:34.169655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.715 [2024-07-15 10:10:34.169674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:20.715 [2024-07-15 10:10:34.172638] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.715 [2024-07-15 10:10:34.172682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.715 [2024-07-15 10:10:34.172690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:20.715 [2024-07-15 10:10:34.175991] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.715 [2024-07-15 10:10:34.176027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.715 [2024-07-15 10:10:34.176035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:20.715 [2024-07-15 10:10:34.178806] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.715 [2024-07-15 10:10:34.178842] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.715 [2024-07-15 10:10:34.178849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:20.715 [2024-07-15 10:10:34.181843] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.715 [2024-07-15 10:10:34.181894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.715 [2024-07-15 10:10:34.181902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:20.715 [2024-07-15 10:10:34.184755] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.715 [2024-07-15 10:10:34.184789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.715 [2024-07-15 10:10:34.184796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:20.715 [2024-07-15 10:10:34.187531] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.715 [2024-07-15 10:10:34.187562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.715 [2024-07-15 10:10:34.187569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:20.715 [2024-07-15 10:10:34.190335] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.715 [2024-07-15 10:10:34.190367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.715 [2024-07-15 10:10:34.190374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:20.715 [2024-07-15 10:10:34.193588] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.715 [2024-07-15 10:10:34.193623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.715 [2024-07-15 10:10:34.193630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:20.715 [2024-07-15 10:10:34.196180] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.715 [2024-07-15 10:10:34.196214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.715 [2024-07-15 10:10:34.196221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:20.715 [2024-07-15 10:10:34.198894] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.715 
[2024-07-15 10:10:34.198927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.715 [2024-07-15 10:10:34.198934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:20.715 [2024-07-15 10:10:34.202128] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.715 [2024-07-15 10:10:34.202162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.715 [2024-07-15 10:10:34.202170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:20.715 [2024-07-15 10:10:34.204589] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.715 [2024-07-15 10:10:34.204622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.715 [2024-07-15 10:10:34.204630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:20.715 [2024-07-15 10:10:34.208157] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.715 [2024-07-15 10:10:34.208190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.715 [2024-07-15 10:10:34.208198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:20.715 [2024-07-15 10:10:34.211903] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.715 [2024-07-15 10:10:34.211937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.715 [2024-07-15 10:10:34.211944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:20.715 [2024-07-15 10:10:34.214022] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.715 [2024-07-15 10:10:34.214054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.715 [2024-07-15 10:10:34.214062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:20.715 [2024-07-15 10:10:34.217612] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.715 [2024-07-15 10:10:34.217645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.715 [2024-07-15 10:10:34.217652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:20.715 [2024-07-15 10:10:34.221394] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x1fcb380) 00:29:20.715 [2024-07-15 10:10:34.221428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.715 [2024-07-15 10:10:34.221436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:20.715 [2024-07-15 10:10:34.224588] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.716 [2024-07-15 10:10:34.224622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.716 [2024-07-15 10:10:34.224629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:20.716 [2024-07-15 10:10:34.228325] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.716 [2024-07-15 10:10:34.228356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.716 [2024-07-15 10:10:34.228364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:20.716 [2024-07-15 10:10:34.231828] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.716 [2024-07-15 10:10:34.231860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.716 [2024-07-15 10:10:34.231867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:20.716 [2024-07-15 10:10:34.235823] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.716 [2024-07-15 10:10:34.235857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.716 [2024-07-15 10:10:34.235864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:20.716 [2024-07-15 10:10:34.239226] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.716 [2024-07-15 10:10:34.239261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.716 [2024-07-15 10:10:34.239268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:20.716 [2024-07-15 10:10:34.242962] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.716 [2024-07-15 10:10:34.242995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.716 [2024-07-15 10:10:34.243003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:20.716 [2024-07-15 10:10:34.246414] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.716 [2024-07-15 10:10:34.246449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.716 [2024-07-15 10:10:34.246457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:20.716 [2024-07-15 10:10:34.250087] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.716 [2024-07-15 10:10:34.250120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.716 [2024-07-15 10:10:34.250128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:20.716 [2024-07-15 10:10:34.252475] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.716 [2024-07-15 10:10:34.252507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.716 [2024-07-15 10:10:34.252514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:20.716 [2024-07-15 10:10:34.256247] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.716 [2024-07-15 10:10:34.256291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.716 [2024-07-15 10:10:34.256299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:20.716 [2024-07-15 10:10:34.259962] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.716 [2024-07-15 10:10:34.259996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.716 [2024-07-15 10:10:34.260004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:20.716 [2024-07-15 10:10:34.262606] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.716 [2024-07-15 10:10:34.262639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.716 [2024-07-15 10:10:34.262646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:20.716 [2024-07-15 10:10:34.265105] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.716 [2024-07-15 10:10:34.265137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.716 [2024-07-15 10:10:34.265145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 
dnr:0 00:29:20.716 [2024-07-15 10:10:34.268359] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.716 [2024-07-15 10:10:34.268400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.716 [2024-07-15 10:10:34.268408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:20.716 [2024-07-15 10:10:34.271950] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.716 [2024-07-15 10:10:34.271981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.716 [2024-07-15 10:10:34.271989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:20.716 [2024-07-15 10:10:34.275450] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.716 [2024-07-15 10:10:34.275482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.716 [2024-07-15 10:10:34.275490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:20.716 [2024-07-15 10:10:34.278853] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.716 [2024-07-15 10:10:34.278886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.716 [2024-07-15 10:10:34.278893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:20.716 [2024-07-15 10:10:34.282270] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.716 [2024-07-15 10:10:34.282304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.716 [2024-07-15 10:10:34.282311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:20.716 [2024-07-15 10:10:34.285867] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.716 [2024-07-15 10:10:34.285903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.716 [2024-07-15 10:10:34.285911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:20.716 [2024-07-15 10:10:34.289359] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.716 [2024-07-15 10:10:34.289394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.716 [2024-07-15 10:10:34.289401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:20.716 [2024-07-15 10:10:34.292001] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.716 [2024-07-15 10:10:34.292032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.716 [2024-07-15 10:10:34.292040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:20.716 [2024-07-15 10:10:34.295113] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.716 [2024-07-15 10:10:34.295148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.716 [2024-07-15 10:10:34.295156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:20.978 [2024-07-15 10:10:34.298900] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.978 [2024-07-15 10:10:34.298938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.978 [2024-07-15 10:10:34.298946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:20.978 [2024-07-15 10:10:34.302487] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.978 [2024-07-15 10:10:34.302523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.978 [2024-07-15 10:10:34.302531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:20.978 [2024-07-15 10:10:34.306128] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.978 [2024-07-15 10:10:34.306161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.978 [2024-07-15 10:10:34.306168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:20.978 [2024-07-15 10:10:34.309413] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.978 [2024-07-15 10:10:34.309448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.978 [2024-07-15 10:10:34.309456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:20.978 [2024-07-15 10:10:34.311518] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.978 [2024-07-15 10:10:34.311550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.978 [2024-07-15 10:10:34.311557] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:20.978 [2024-07-15 10:10:34.315094] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.978 [2024-07-15 10:10:34.315130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.978 [2024-07-15 10:10:34.315138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:20.978 [2024-07-15 10:10:34.317605] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.978 [2024-07-15 10:10:34.317641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.978 [2024-07-15 10:10:34.317650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:20.978 [2024-07-15 10:10:34.321090] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.978 [2024-07-15 10:10:34.321127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.978 [2024-07-15 10:10:34.321135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:20.978 [2024-07-15 10:10:34.324365] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.978 [2024-07-15 10:10:34.324438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.978 [2024-07-15 10:10:34.324445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:20.978 [2024-07-15 10:10:34.327323] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.978 [2024-07-15 10:10:34.327360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.978 [2024-07-15 10:10:34.327368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:20.978 [2024-07-15 10:10:34.330207] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.978 [2024-07-15 10:10:34.330236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.978 [2024-07-15 10:10:34.330244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:20.978 [2024-07-15 10:10:34.333919] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.978 [2024-07-15 10:10:34.333952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.978 [2024-07-15 10:10:34.333961] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:20.978 [2024-07-15 10:10:34.337247] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.978 [2024-07-15 10:10:34.337283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.978 [2024-07-15 10:10:34.337292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:20.978 [2024-07-15 10:10:34.340297] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.978 [2024-07-15 10:10:34.340331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.978 [2024-07-15 10:10:34.340341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:20.978 [2024-07-15 10:10:34.343695] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.978 [2024-07-15 10:10:34.343731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.978 [2024-07-15 10:10:34.343740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:20.978 [2024-07-15 10:10:34.347068] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.978 [2024-07-15 10:10:34.347102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.978 [2024-07-15 10:10:34.347111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:20.978 [2024-07-15 10:10:34.350746] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.978 [2024-07-15 10:10:34.350778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.978 [2024-07-15 10:10:34.350786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:20.978 [2024-07-15 10:10:34.355142] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.978 [2024-07-15 10:10:34.355176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.978 [2024-07-15 10:10:34.355184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:20.978 [2024-07-15 10:10:34.358737] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.978 [2024-07-15 10:10:34.358771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:29:20.978 [2024-07-15 10:10:34.358779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:20.978 [2024-07-15 10:10:34.362093] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.978 [2024-07-15 10:10:34.362129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.978 [2024-07-15 10:10:34.362137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:20.978 [2024-07-15 10:10:34.365526] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.978 [2024-07-15 10:10:34.365565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.978 [2024-07-15 10:10:34.365574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:20.978 [2024-07-15 10:10:34.368503] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.978 [2024-07-15 10:10:34.368537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.978 [2024-07-15 10:10:34.368546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:20.978 [2024-07-15 10:10:34.372477] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.979 [2024-07-15 10:10:34.372510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.979 [2024-07-15 10:10:34.372517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:20.979 [2024-07-15 10:10:34.375380] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.979 [2024-07-15 10:10:34.375414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.979 [2024-07-15 10:10:34.375421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:20.979 [2024-07-15 10:10:34.378516] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.979 [2024-07-15 10:10:34.378549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.979 [2024-07-15 10:10:34.378557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:20.979 [2024-07-15 10:10:34.382005] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.979 [2024-07-15 10:10:34.382040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 
lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.979 [2024-07-15 10:10:34.382048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:20.979 [2024-07-15 10:10:34.385430] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.979 [2024-07-15 10:10:34.385466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.979 [2024-07-15 10:10:34.385473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:20.979 [2024-07-15 10:10:34.388215] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.979 [2024-07-15 10:10:34.388251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.979 [2024-07-15 10:10:34.388259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:20.979 [2024-07-15 10:10:34.391754] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.979 [2024-07-15 10:10:34.391788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.979 [2024-07-15 10:10:34.391796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:20.979 [2024-07-15 10:10:34.394767] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.979 [2024-07-15 10:10:34.394801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.979 [2024-07-15 10:10:34.394808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:20.979 [2024-07-15 10:10:34.398313] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.979 [2024-07-15 10:10:34.398348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.979 [2024-07-15 10:10:34.398356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:20.979 [2024-07-15 10:10:34.401425] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.979 [2024-07-15 10:10:34.401459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.979 [2024-07-15 10:10:34.401466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:20.979 [2024-07-15 10:10:34.405147] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.979 [2024-07-15 10:10:34.405181] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.979 [2024-07-15 10:10:34.405190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:20.979 [2024-07-15 10:10:34.408221] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.979 [2024-07-15 10:10:34.408257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.979 [2024-07-15 10:10:34.408264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:20.979 [2024-07-15 10:10:34.411755] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.979 [2024-07-15 10:10:34.411790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.979 [2024-07-15 10:10:34.411798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:20.979 [2024-07-15 10:10:34.415502] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.979 [2024-07-15 10:10:34.415536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.979 [2024-07-15 10:10:34.415544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:20.979 [2024-07-15 10:10:34.418597] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.979 [2024-07-15 10:10:34.418632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.979 [2024-07-15 10:10:34.418640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:20.979 [2024-07-15 10:10:34.422151] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.979 [2024-07-15 10:10:34.422198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.979 [2024-07-15 10:10:34.422206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:20.979 [2024-07-15 10:10:34.425193] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.979 [2024-07-15 10:10:34.425227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.979 [2024-07-15 10:10:34.425235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:20.979 [2024-07-15 10:10:34.428097] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 
00:29:20.979 [2024-07-15 10:10:34.428130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.979 [2024-07-15 10:10:34.428137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:20.979 [2024-07-15 10:10:34.431329] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.979 [2024-07-15 10:10:34.431363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.979 [2024-07-15 10:10:34.431371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:20.979 [2024-07-15 10:10:34.434774] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.979 [2024-07-15 10:10:34.434808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.979 [2024-07-15 10:10:34.434816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:20.979 [2024-07-15 10:10:34.438510] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.979 [2024-07-15 10:10:34.438546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.979 [2024-07-15 10:10:34.438554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:20.979 [2024-07-15 10:10:34.441310] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.979 [2024-07-15 10:10:34.441347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.979 [2024-07-15 10:10:34.441355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:20.979 [2024-07-15 10:10:34.445165] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.979 [2024-07-15 10:10:34.445208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.979 [2024-07-15 10:10:34.445217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:20.979 [2024-07-15 10:10:34.449136] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.979 [2024-07-15 10:10:34.449179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.979 [2024-07-15 10:10:34.449187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:20.979 [2024-07-15 10:10:34.451621] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.979 [2024-07-15 10:10:34.451671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.979 [2024-07-15 10:10:34.451679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:20.979 [2024-07-15 10:10:34.454915] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.979 [2024-07-15 10:10:34.454953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.979 [2024-07-15 10:10:34.454961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:20.979 [2024-07-15 10:10:34.458457] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.979 [2024-07-15 10:10:34.458497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.979 [2024-07-15 10:10:34.458504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:20.979 [2024-07-15 10:10:34.460838] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.979 [2024-07-15 10:10:34.460874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.979 [2024-07-15 10:10:34.460882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:20.979 [2024-07-15 10:10:34.464918] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.979 [2024-07-15 10:10:34.464956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.979 [2024-07-15 10:10:34.464965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:20.979 [2024-07-15 10:10:34.467517] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.979 [2024-07-15 10:10:34.467553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.980 [2024-07-15 10:10:34.467560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:20.980 [2024-07-15 10:10:34.471476] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.980 [2024-07-15 10:10:34.471516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.980 [2024-07-15 10:10:34.471524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:20.980 [2024-07-15 10:10:34.475593] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.980 [2024-07-15 10:10:34.475634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.980 [2024-07-15 10:10:34.475641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:20.980 [2024-07-15 10:10:34.479439] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.980 [2024-07-15 10:10:34.479477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.980 [2024-07-15 10:10:34.479485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:20.980 [2024-07-15 10:10:34.481858] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.980 [2024-07-15 10:10:34.481891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.980 [2024-07-15 10:10:34.481912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:20.980 [2024-07-15 10:10:34.485712] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.980 [2024-07-15 10:10:34.485746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.980 [2024-07-15 10:10:34.485754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:20.980 [2024-07-15 10:10:34.488457] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.980 [2024-07-15 10:10:34.488488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.980 [2024-07-15 10:10:34.488497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:20.980 [2024-07-15 10:10:34.491534] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.980 [2024-07-15 10:10:34.491567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.980 [2024-07-15 10:10:34.491575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:20.980 [2024-07-15 10:10:34.494255] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.980 [2024-07-15 10:10:34.494287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.980 [2024-07-15 10:10:34.494295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 
dnr:0 00:29:20.980 [2024-07-15 10:10:34.497523] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.980 [2024-07-15 10:10:34.497554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.980 [2024-07-15 10:10:34.497562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:20.980 [2024-07-15 10:10:34.500171] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.980 [2024-07-15 10:10:34.500202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.980 [2024-07-15 10:10:34.500210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:20.980 [2024-07-15 10:10:34.503334] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.980 [2024-07-15 10:10:34.503364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.980 [2024-07-15 10:10:34.503373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:20.980 [2024-07-15 10:10:34.507346] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.980 [2024-07-15 10:10:34.507379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.980 [2024-07-15 10:10:34.507386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:20.980 [2024-07-15 10:10:34.511057] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.980 [2024-07-15 10:10:34.511092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.980 [2024-07-15 10:10:34.511101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:20.980 [2024-07-15 10:10:34.514680] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.980 [2024-07-15 10:10:34.514705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.980 [2024-07-15 10:10:34.514713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:20.980 [2024-07-15 10:10:34.517668] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.980 [2024-07-15 10:10:34.517695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.980 [2024-07-15 10:10:34.517703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:20.980 [2024-07-15 10:10:34.520435] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.980 [2024-07-15 10:10:34.520467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.980 [2024-07-15 10:10:34.520476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:20.980 [2024-07-15 10:10:34.524476] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.980 [2024-07-15 10:10:34.524511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.980 [2024-07-15 10:10:34.524518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:20.980 [2024-07-15 10:10:34.528770] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.980 [2024-07-15 10:10:34.528807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.980 [2024-07-15 10:10:34.528817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:20.980 [2024-07-15 10:10:34.531469] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.980 [2024-07-15 10:10:34.531499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.980 [2024-07-15 10:10:34.531507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:20.980 [2024-07-15 10:10:34.535229] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.980 [2024-07-15 10:10:34.535263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.980 [2024-07-15 10:10:34.535272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:20.980 [2024-07-15 10:10:34.538990] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.980 [2024-07-15 10:10:34.539024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.980 [2024-07-15 10:10:34.539032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:20.980 [2024-07-15 10:10:34.542545] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.980 [2024-07-15 10:10:34.542576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.980 [2024-07-15 10:10:34.542583] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:20.980 [2024-07-15 10:10:34.545942] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.980 [2024-07-15 10:10:34.545974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.980 [2024-07-15 10:10:34.545982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:20.980 [2024-07-15 10:10:34.549699] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.980 [2024-07-15 10:10:34.549729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.980 [2024-07-15 10:10:34.549737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:20.980 [2024-07-15 10:10:34.552489] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.980 [2024-07-15 10:10:34.552522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.980 [2024-07-15 10:10:34.552530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:20.980 [2024-07-15 10:10:34.555767] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.980 [2024-07-15 10:10:34.555795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.980 [2024-07-15 10:10:34.555803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:20.980 [2024-07-15 10:10:34.559128] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:20.980 [2024-07-15 10:10:34.559160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.980 [2024-07-15 10:10:34.559168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:21.242 [2024-07-15 10:10:34.562401] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:21.242 [2024-07-15 10:10:34.562432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.242 [2024-07-15 10:10:34.562440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:21.242 [2024-07-15 10:10:34.566253] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:21.242 [2024-07-15 10:10:34.566290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.242 [2024-07-15 10:10:34.566299] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:21.242 [2024-07-15 10:10:34.570042] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:21.242 [2024-07-15 10:10:34.570077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.242 [2024-07-15 10:10:34.570084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:21.242 [2024-07-15 10:10:34.573991] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:21.242 [2024-07-15 10:10:34.574025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.242 [2024-07-15 10:10:34.574033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:21.242 [2024-07-15 10:10:34.576631] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:21.242 [2024-07-15 10:10:34.576669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.242 [2024-07-15 10:10:34.576678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:21.242 [2024-07-15 10:10:34.579938] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:21.242 [2024-07-15 10:10:34.579970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.242 [2024-07-15 10:10:34.579978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:21.242 [2024-07-15 10:10:34.583761] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:21.242 [2024-07-15 10:10:34.583792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.242 [2024-07-15 10:10:34.583801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:21.242 [2024-07-15 10:10:34.587741] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:21.242 [2024-07-15 10:10:34.587770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.242 [2024-07-15 10:10:34.587778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:21.242 [2024-07-15 10:10:34.591412] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:21.242 [2024-07-15 10:10:34.591443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:29:21.242 [2024-07-15 10:10:34.591451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:21.242 [2024-07-15 10:10:34.594826] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:21.242 [2024-07-15 10:10:34.594856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.242 [2024-07-15 10:10:34.594864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:21.242 [2024-07-15 10:10:34.597574] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:21.242 [2024-07-15 10:10:34.597607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.242 [2024-07-15 10:10:34.597615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:21.242 [2024-07-15 10:10:34.600872] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:21.242 [2024-07-15 10:10:34.600906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.242 [2024-07-15 10:10:34.600915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:21.242 [2024-07-15 10:10:34.604023] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:21.242 [2024-07-15 10:10:34.604052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.242 [2024-07-15 10:10:34.604059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:21.242 [2024-07-15 10:10:34.606613] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:21.242 [2024-07-15 10:10:34.606644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.242 [2024-07-15 10:10:34.606652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:21.242 [2024-07-15 10:10:34.610023] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:21.242 [2024-07-15 10:10:34.610056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.242 [2024-07-15 10:10:34.610065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:21.242 [2024-07-15 10:10:34.613284] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:21.242 [2024-07-15 10:10:34.613315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 
lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.242 [2024-07-15 10:10:34.613323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:21.242 [2024-07-15 10:10:34.615995] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:21.243 [2024-07-15 10:10:34.616023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.243 [2024-07-15 10:10:34.616031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:21.243 [2024-07-15 10:10:34.619255] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:21.243 [2024-07-15 10:10:34.619284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.243 [2024-07-15 10:10:34.619292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:21.243 [2024-07-15 10:10:34.622186] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:21.243 [2024-07-15 10:10:34.622216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.243 [2024-07-15 10:10:34.622224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:21.243 [2024-07-15 10:10:34.625643] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:21.243 [2024-07-15 10:10:34.625682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.243 [2024-07-15 10:10:34.625689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:21.243 [2024-07-15 10:10:34.628673] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:21.243 [2024-07-15 10:10:34.628700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.243 [2024-07-15 10:10:34.628708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:21.243 [2024-07-15 10:10:34.631385] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:21.243 [2024-07-15 10:10:34.631418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.243 [2024-07-15 10:10:34.631426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:21.243 [2024-07-15 10:10:34.635100] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:21.243 [2024-07-15 10:10:34.635133] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.243 [2024-07-15 10:10:34.635141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:21.243 [2024-07-15 10:10:34.639128] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:21.243 [2024-07-15 10:10:34.639162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.243 [2024-07-15 10:10:34.639170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:21.243 [2024-07-15 10:10:34.641828] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:21.243 [2024-07-15 10:10:34.641855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.243 [2024-07-15 10:10:34.641862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:21.243 [2024-07-15 10:10:34.645032] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:21.243 [2024-07-15 10:10:34.645061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.243 [2024-07-15 10:10:34.645070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:21.243 [2024-07-15 10:10:34.648625] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:21.243 [2024-07-15 10:10:34.648655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.243 [2024-07-15 10:10:34.648674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:21.243 [2024-07-15 10:10:34.651915] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:21.243 [2024-07-15 10:10:34.651943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.243 [2024-07-15 10:10:34.651950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:21.243 [2024-07-15 10:10:34.655878] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:21.243 [2024-07-15 10:10:34.655908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.243 [2024-07-15 10:10:34.655915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:21.243 [2024-07-15 10:10:34.658484] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 
00:29:21.243 [2024-07-15 10:10:34.658512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.243 [2024-07-15 10:10:34.658519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:21.243 [2024-07-15 10:10:34.661618] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:21.243 [2024-07-15 10:10:34.661667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.243 [2024-07-15 10:10:34.661675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:21.243 [2024-07-15 10:10:34.665367] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:21.243 [2024-07-15 10:10:34.665396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.243 [2024-07-15 10:10:34.665403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:21.243 [2024-07-15 10:10:34.668989] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:21.243 [2024-07-15 10:10:34.669020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.243 [2024-07-15 10:10:34.669028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:21.243 [2024-07-15 10:10:34.671495] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:21.243 [2024-07-15 10:10:34.671523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.243 [2024-07-15 10:10:34.671530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:21.243 [2024-07-15 10:10:34.674366] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:21.243 [2024-07-15 10:10:34.674395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.243 [2024-07-15 10:10:34.674402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:21.243 [2024-07-15 10:10:34.678241] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:21.243 [2024-07-15 10:10:34.678271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.243 [2024-07-15 10:10:34.678278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:21.243 [2024-07-15 10:10:34.682194] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: 
data digest error on tqpair=(0x1fcb380) 00:29:21.243 [2024-07-15 10:10:34.682224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.243 [2024-07-15 10:10:34.682231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:21.243 [2024-07-15 10:10:34.685871] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:21.243 [2024-07-15 10:10:34.685900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.243 [2024-07-15 10:10:34.685907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:21.243 [2024-07-15 10:10:34.687964] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:21.243 [2024-07-15 10:10:34.687992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.243 [2024-07-15 10:10:34.688000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:21.243 [2024-07-15 10:10:34.691882] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:21.243 [2024-07-15 10:10:34.691915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.243 [2024-07-15 10:10:34.691922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:21.243 [2024-07-15 10:10:34.695415] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:21.243 [2024-07-15 10:10:34.695446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.244 [2024-07-15 10:10:34.695453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:21.244 [2024-07-15 10:10:34.698826] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:21.244 [2024-07-15 10:10:34.698855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.244 [2024-07-15 10:10:34.698862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:21.244 [2024-07-15 10:10:34.702165] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:21.244 [2024-07-15 10:10:34.702195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.244 [2024-07-15 10:10:34.702202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:21.244 [2024-07-15 10:10:34.705732] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:21.244 [2024-07-15 10:10:34.705764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.244 [2024-07-15 10:10:34.705771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:21.244 [2024-07-15 10:10:34.708273] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:21.244 [2024-07-15 10:10:34.708300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.244 [2024-07-15 10:10:34.708307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:21.244 [2024-07-15 10:10:34.711557] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:21.244 [2024-07-15 10:10:34.711590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.244 [2024-07-15 10:10:34.711598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:21.244 [2024-07-15 10:10:34.715569] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:21.244 [2024-07-15 10:10:34.715601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.244 [2024-07-15 10:10:34.715609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:21.244 [2024-07-15 10:10:34.718816] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:21.244 [2024-07-15 10:10:34.718848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.244 [2024-07-15 10:10:34.718856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:21.244 [2024-07-15 10:10:34.721448] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:21.244 [2024-07-15 10:10:34.721480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.244 [2024-07-15 10:10:34.721487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:21.244 [2024-07-15 10:10:34.724966] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:21.244 [2024-07-15 10:10:34.725002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.244 [2024-07-15 10:10:34.725010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 
dnr:0 00:29:21.244 [2024-07-15 10:10:34.727483] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:21.244 [2024-07-15 10:10:34.727516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.244 [2024-07-15 10:10:34.727523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:21.244 [2024-07-15 10:10:34.730911] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:21.244 [2024-07-15 10:10:34.730945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.244 [2024-07-15 10:10:34.730953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:21.244 [2024-07-15 10:10:34.734748] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:21.244 [2024-07-15 10:10:34.734781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.244 [2024-07-15 10:10:34.734788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:21.244 [2024-07-15 10:10:34.737615] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:21.244 [2024-07-15 10:10:34.737648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.244 [2024-07-15 10:10:34.737656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:21.244 [2024-07-15 10:10:34.740859] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:21.244 [2024-07-15 10:10:34.740891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.244 [2024-07-15 10:10:34.740898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:21.244 [2024-07-15 10:10:34.744469] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:21.244 [2024-07-15 10:10:34.744498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.244 [2024-07-15 10:10:34.744505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:21.244 [2024-07-15 10:10:34.747781] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:21.244 [2024-07-15 10:10:34.747810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.244 [2024-07-15 10:10:34.747817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:21.244 [2024-07-15 10:10:34.750962] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:21.244 [2024-07-15 10:10:34.750991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.244 [2024-07-15 10:10:34.750999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:21.244 [2024-07-15 10:10:34.754106] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:21.244 [2024-07-15 10:10:34.754136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.244 [2024-07-15 10:10:34.754144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:21.244 [2024-07-15 10:10:34.756767] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:21.244 [2024-07-15 10:10:34.756806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.244 [2024-07-15 10:10:34.756813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:21.244 [2024-07-15 10:10:34.759750] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:21.244 [2024-07-15 10:10:34.759777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.244 [2024-07-15 10:10:34.759785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:21.244 [2024-07-15 10:10:34.762929] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:21.244 [2024-07-15 10:10:34.762958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.244 [2024-07-15 10:10:34.762965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:21.244 [2024-07-15 10:10:34.765364] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:21.244 [2024-07-15 10:10:34.765393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.244 [2024-07-15 10:10:34.765401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:21.244 [2024-07-15 10:10:34.768776] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:21.244 [2024-07-15 10:10:34.768799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.244 [2024-07-15 10:10:34.768806] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:21.244 [2024-07-15 10:10:34.772637] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:21.245 [2024-07-15 10:10:34.772676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.245 [2024-07-15 10:10:34.772684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:21.245 [2024-07-15 10:10:34.775229] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:21.245 [2024-07-15 10:10:34.775255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.245 [2024-07-15 10:10:34.775263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:21.245 [2024-07-15 10:10:34.778437] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:21.245 [2024-07-15 10:10:34.778465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.245 [2024-07-15 10:10:34.778471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:21.245 [2024-07-15 10:10:34.782274] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:21.245 [2024-07-15 10:10:34.782304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.245 [2024-07-15 10:10:34.782312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:21.245 [2024-07-15 10:10:34.785747] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:21.245 [2024-07-15 10:10:34.785778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.245 [2024-07-15 10:10:34.785785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:21.245 [2024-07-15 10:10:34.789229] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:21.245 [2024-07-15 10:10:34.789260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.245 [2024-07-15 10:10:34.789267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:21.245 [2024-07-15 10:10:34.791568] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:21.245 [2024-07-15 10:10:34.791595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:21.245 [2024-07-15 10:10:34.791602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:21.245 [2024-07-15 10:10:34.795251] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:21.245 [2024-07-15 10:10:34.795281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.245 [2024-07-15 10:10:34.795289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:21.245 [2024-07-15 10:10:34.798856] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:21.245 [2024-07-15 10:10:34.798884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.245 [2024-07-15 10:10:34.798892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:21.245 [2024-07-15 10:10:34.802118] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:21.245 [2024-07-15 10:10:34.802152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.245 [2024-07-15 10:10:34.802159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:21.245 [2024-07-15 10:10:34.805728] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:21.245 [2024-07-15 10:10:34.805756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.245 [2024-07-15 10:10:34.805764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:21.245 [2024-07-15 10:10:34.809485] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:21.245 [2024-07-15 10:10:34.809517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.245 [2024-07-15 10:10:34.809524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:21.245 [2024-07-15 10:10:34.812857] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:21.245 [2024-07-15 10:10:34.812888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.245 [2024-07-15 10:10:34.812895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:21.245 [2024-07-15 10:10:34.816228] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:21.245 [2024-07-15 10:10:34.816258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14720 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.245 [2024-07-15 10:10:34.816266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:21.245 [2024-07-15 10:10:34.819986] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:21.245 [2024-07-15 10:10:34.820019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.245 [2024-07-15 10:10:34.820027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:21.245 [2024-07-15 10:10:34.822757] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:21.245 [2024-07-15 10:10:34.822785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.245 [2024-07-15 10:10:34.822793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:21.506 [2024-07-15 10:10:34.826057] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:21.506 [2024-07-15 10:10:34.826088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.506 [2024-07-15 10:10:34.826096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:21.507 [2024-07-15 10:10:34.829975] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:21.507 [2024-07-15 10:10:34.830006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.507 [2024-07-15 10:10:34.830013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:21.507 [2024-07-15 10:10:34.833968] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:21.507 [2024-07-15 10:10:34.833998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.507 [2024-07-15 10:10:34.834006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:21.507 [2024-07-15 10:10:34.837667] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:21.507 [2024-07-15 10:10:34.837709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.507 [2024-07-15 10:10:34.837717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:21.507 [2024-07-15 10:10:34.840053] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:21.507 [2024-07-15 10:10:34.840082] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.507 [2024-07-15 10:10:34.840088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:21.507 [2024-07-15 10:10:34.843861] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:21.507 [2024-07-15 10:10:34.843889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.507 [2024-07-15 10:10:34.843896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:21.507 [2024-07-15 10:10:34.847623] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:21.507 [2024-07-15 10:10:34.847652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.507 [2024-07-15 10:10:34.847670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:21.507 [2024-07-15 10:10:34.851496] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:21.507 [2024-07-15 10:10:34.851527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.507 [2024-07-15 10:10:34.851534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:21.507 [2024-07-15 10:10:34.854128] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:21.507 [2024-07-15 10:10:34.854155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.507 [2024-07-15 10:10:34.854162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:21.507 [2024-07-15 10:10:34.857346] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:21.507 [2024-07-15 10:10:34.857376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.507 [2024-07-15 10:10:34.857383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:21.507 [2024-07-15 10:10:34.860965] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:21.507 [2024-07-15 10:10:34.860997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.507 [2024-07-15 10:10:34.861005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:21.507 [2024-07-15 10:10:34.864175] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:21.507 [2024-07-15 10:10:34.864204] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.507 [2024-07-15 10:10:34.864211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:21.507 [2024-07-15 10:10:34.866438] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:21.507 [2024-07-15 10:10:34.866466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.507 [2024-07-15 10:10:34.866474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:21.507 [2024-07-15 10:10:34.869804] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:21.507 [2024-07-15 10:10:34.869832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.507 [2024-07-15 10:10:34.869839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:21.507 [2024-07-15 10:10:34.872729] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:21.507 [2024-07-15 10:10:34.872759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.507 [2024-07-15 10:10:34.872766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:21.507 [2024-07-15 10:10:34.876016] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:21.507 [2024-07-15 10:10:34.876047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.507 [2024-07-15 10:10:34.876054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:21.507 [2024-07-15 10:10:34.879131] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:21.507 [2024-07-15 10:10:34.879160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.507 [2024-07-15 10:10:34.879168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:21.507 [2024-07-15 10:10:34.882128] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:21.507 [2024-07-15 10:10:34.882158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.507 [2024-07-15 10:10:34.882165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:21.507 [2024-07-15 10:10:34.884880] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 
00:29:21.507 [2024-07-15 10:10:34.884913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.507 [2024-07-15 10:10:34.884921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:21.507 [2024-07-15 10:10:34.888129] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:21.507 [2024-07-15 10:10:34.888158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.507 [2024-07-15 10:10:34.888166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:21.507 [2024-07-15 10:10:34.890892] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:21.507 [2024-07-15 10:10:34.890920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.507 [2024-07-15 10:10:34.890928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:21.507 [2024-07-15 10:10:34.894144] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:21.507 [2024-07-15 10:10:34.894173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.507 [2024-07-15 10:10:34.894180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:21.507 [2024-07-15 10:10:34.896729] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:21.507 [2024-07-15 10:10:34.896757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.507 [2024-07-15 10:10:34.896765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:21.507 [2024-07-15 10:10:34.899405] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:21.507 [2024-07-15 10:10:34.899433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.507 [2024-07-15 10:10:34.899440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:21.507 [2024-07-15 10:10:34.903295] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:21.507 [2024-07-15 10:10:34.903326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.507 [2024-07-15 10:10:34.903333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:21.507 [2024-07-15 10:10:34.907139] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:21.507 [2024-07-15 10:10:34.907168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.507 [2024-07-15 10:10:34.907175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:21.507 [2024-07-15 10:10:34.909542] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:21.507 [2024-07-15 10:10:34.909573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.507 [2024-07-15 10:10:34.909581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:21.507 [2024-07-15 10:10:34.913235] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:21.507 [2024-07-15 10:10:34.913267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.507 [2024-07-15 10:10:34.913274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:21.507 [2024-07-15 10:10:34.916608] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:21.507 [2024-07-15 10:10:34.916637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.507 [2024-07-15 10:10:34.916644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:21.507 [2024-07-15 10:10:34.919835] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:21.507 [2024-07-15 10:10:34.919863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.508 [2024-07-15 10:10:34.919871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:21.508 [2024-07-15 10:10:34.923444] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:21.508 [2024-07-15 10:10:34.923474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.508 [2024-07-15 10:10:34.923481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:21.508 [2024-07-15 10:10:34.926848] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:21.508 [2024-07-15 10:10:34.926878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.508 [2024-07-15 10:10:34.926886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:21.508 [2024-07-15 10:10:34.930375] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:21.508 [2024-07-15 10:10:34.930406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.508 [2024-07-15 10:10:34.930413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:21.508 [2024-07-15 10:10:34.934283] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:21.508 [2024-07-15 10:10:34.934315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.508 [2024-07-15 10:10:34.934322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:21.508 [2024-07-15 10:10:34.937977] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:21.508 [2024-07-15 10:10:34.938008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.508 [2024-07-15 10:10:34.938016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:21.508 [2024-07-15 10:10:34.940291] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:21.508 [2024-07-15 10:10:34.940320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.508 [2024-07-15 10:10:34.940328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:21.508 [2024-07-15 10:10:34.943848] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:21.508 [2024-07-15 10:10:34.943880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.508 [2024-07-15 10:10:34.943887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:21.508 [2024-07-15 10:10:34.947473] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:21.508 [2024-07-15 10:10:34.947506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.508 [2024-07-15 10:10:34.947513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:21.508 [2024-07-15 10:10:34.951100] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:21.508 [2024-07-15 10:10:34.951132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.508 [2024-07-15 10:10:34.951140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:29:21.508 [2024-07-15 10:10:34.954539] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:21.508 [2024-07-15 10:10:34.954570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.508 [2024-07-15 10:10:34.954578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:21.508 [2024-07-15 10:10:34.956560] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:21.508 [2024-07-15 10:10:34.956600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.508 [2024-07-15 10:10:34.956607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:21.508 [2024-07-15 10:10:34.960797] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:21.508 [2024-07-15 10:10:34.960831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.508 [2024-07-15 10:10:34.960839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:21.508 [2024-07-15 10:10:34.964184] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:21.508 [2024-07-15 10:10:34.964214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.508 [2024-07-15 10:10:34.964222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:21.508 [2024-07-15 10:10:34.966436] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:21.508 [2024-07-15 10:10:34.966464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.508 [2024-07-15 10:10:34.966472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:21.508 [2024-07-15 10:10:34.970111] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:21.508 [2024-07-15 10:10:34.970143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.508 [2024-07-15 10:10:34.970151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:21.508 [2024-07-15 10:10:34.973869] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:21.508 [2024-07-15 10:10:34.973900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.508 [2024-07-15 10:10:34.973908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:21.508 [2024-07-15 10:10:34.977357] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:21.508 [2024-07-15 10:10:34.977390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.508 [2024-07-15 10:10:34.977399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:21.508 [2024-07-15 10:10:34.981365] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:21.508 [2024-07-15 10:10:34.981401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.508 [2024-07-15 10:10:34.981410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:21.508 [2024-07-15 10:10:34.985192] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:21.508 [2024-07-15 10:10:34.985227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.508 [2024-07-15 10:10:34.985235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:21.508 [2024-07-15 10:10:34.988874] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:21.508 [2024-07-15 10:10:34.988905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.508 [2024-07-15 10:10:34.988913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:21.508 [2024-07-15 10:10:34.992797] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:21.508 [2024-07-15 10:10:34.992829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.508 [2024-07-15 10:10:34.992838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:21.508 [2024-07-15 10:10:34.995052] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:21.508 [2024-07-15 10:10:34.995080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.508 [2024-07-15 10:10:34.995088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:21.508 [2024-07-15 10:10:34.998916] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:21.508 [2024-07-15 10:10:34.998946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.508 [2024-07-15 10:10:34.998954] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:21.508 [2024-07-15 10:10:35.001521] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:21.508 [2024-07-15 10:10:35.001551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.508 [2024-07-15 10:10:35.001559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:21.508 [2024-07-15 10:10:35.004872] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:21.508 [2024-07-15 10:10:35.004902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.508 [2024-07-15 10:10:35.004909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:21.508 [2024-07-15 10:10:35.008743] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:21.508 [2024-07-15 10:10:35.008770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.508 [2024-07-15 10:10:35.008778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:21.508 [2024-07-15 10:10:35.011739] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:21.508 [2024-07-15 10:10:35.011764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.508 [2024-07-15 10:10:35.011771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:21.508 [2024-07-15 10:10:35.015220] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:21.508 [2024-07-15 10:10:35.015249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.508 [2024-07-15 10:10:35.015257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:21.509 [2024-07-15 10:10:35.018576] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:21.509 [2024-07-15 10:10:35.018605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.509 [2024-07-15 10:10:35.018612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:21.509 [2024-07-15 10:10:35.021880] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:21.509 [2024-07-15 10:10:35.021910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.509 [2024-07-15 
10:10:35.021917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:21.509 [2024-07-15 10:10:35.024594] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:21.509 [2024-07-15 10:10:35.024624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.509 [2024-07-15 10:10:35.024632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:21.509 [2024-07-15 10:10:35.027963] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:21.509 [2024-07-15 10:10:35.027994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.509 [2024-07-15 10:10:35.028002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:21.509 [2024-07-15 10:10:35.030897] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:21.509 [2024-07-15 10:10:35.030927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.509 [2024-07-15 10:10:35.030934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:21.509 [2024-07-15 10:10:35.034335] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:21.509 [2024-07-15 10:10:35.034367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.509 [2024-07-15 10:10:35.034375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:21.509 [2024-07-15 10:10:35.037416] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:21.509 [2024-07-15 10:10:35.037448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.509 [2024-07-15 10:10:35.037457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:21.509 [2024-07-15 10:10:35.040582] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:21.509 [2024-07-15 10:10:35.040612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.509 [2024-07-15 10:10:35.040620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:21.509 [2024-07-15 10:10:35.043416] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:21.509 [2024-07-15 10:10:35.043445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:29:21.509 [2024-07-15 10:10:35.043453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:21.509 [2024-07-15 10:10:35.046280] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:21.509 [2024-07-15 10:10:35.046309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.509 [2024-07-15 10:10:35.046317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:21.509 [2024-07-15 10:10:35.049272] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:21.509 [2024-07-15 10:10:35.049306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.509 [2024-07-15 10:10:35.049313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:21.509 [2024-07-15 10:10:35.052894] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:21.509 [2024-07-15 10:10:35.052928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.509 [2024-07-15 10:10:35.052935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:21.509 [2024-07-15 10:10:35.056285] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:21.509 [2024-07-15 10:10:35.056315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.509 [2024-07-15 10:10:35.056322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:21.509 [2024-07-15 10:10:35.059524] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:21.509 [2024-07-15 10:10:35.059556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.509 [2024-07-15 10:10:35.059563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:21.509 [2024-07-15 10:10:35.063263] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:21.509 [2024-07-15 10:10:35.063295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.509 [2024-07-15 10:10:35.063303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:21.509 [2024-07-15 10:10:35.066761] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:21.509 [2024-07-15 10:10:35.066788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:1 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.509 [2024-07-15 10:10:35.066796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:21.509 [2024-07-15 10:10:35.070377] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:21.509 [2024-07-15 10:10:35.070409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.509 [2024-07-15 10:10:35.070418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:21.509 [2024-07-15 10:10:35.073967] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:21.509 [2024-07-15 10:10:35.073998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.509 [2024-07-15 10:10:35.074006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:21.509 [2024-07-15 10:10:35.077723] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:21.509 [2024-07-15 10:10:35.077755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.509 [2024-07-15 10:10:35.077763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:21.509 [2024-07-15 10:10:35.080605] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:21.509 [2024-07-15 10:10:35.080634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.509 [2024-07-15 10:10:35.080643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:21.509 [2024-07-15 10:10:35.083909] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:21.509 [2024-07-15 10:10:35.083940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.509 [2024-07-15 10:10:35.083948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:21.509 [2024-07-15 10:10:35.087218] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:21.509 [2024-07-15 10:10:35.087250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.509 [2024-07-15 10:10:35.087258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:21.771 [2024-07-15 10:10:35.090387] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:21.771 [2024-07-15 10:10:35.090418] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.771 [2024-07-15 10:10:35.090426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:21.771 [2024-07-15 10:10:35.094236] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:21.771 [2024-07-15 10:10:35.094268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.771 [2024-07-15 10:10:35.094276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:21.771 [2024-07-15 10:10:35.097246] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:21.771 [2024-07-15 10:10:35.097278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.771 [2024-07-15 10:10:35.097287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:21.771 [2024-07-15 10:10:35.100621] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:21.771 [2024-07-15 10:10:35.100652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.771 [2024-07-15 10:10:35.100669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:21.771 [2024-07-15 10:10:35.104651] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:21.771 [2024-07-15 10:10:35.104689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.771 [2024-07-15 10:10:35.104697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:21.771 [2024-07-15 10:10:35.108147] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:21.771 [2024-07-15 10:10:35.108177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.771 [2024-07-15 10:10:35.108184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:21.771 [2024-07-15 10:10:35.110193] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:21.772 [2024-07-15 10:10:35.110222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.772 [2024-07-15 10:10:35.110230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:21.772 [2024-07-15 10:10:35.114224] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 
00:29:21.772 [2024-07-15 10:10:35.114258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.772 [2024-07-15 10:10:35.114266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:21.772 [2024-07-15 10:10:35.117873] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:21.772 [2024-07-15 10:10:35.117904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.772 [2024-07-15 10:10:35.117911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:21.772 [2024-07-15 10:10:35.121325] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:21.772 [2024-07-15 10:10:35.121356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.772 [2024-07-15 10:10:35.121363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:21.772 [2024-07-15 10:10:35.124485] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:21.772 [2024-07-15 10:10:35.124514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.772 [2024-07-15 10:10:35.124522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:21.772 [2024-07-15 10:10:35.128078] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:21.772 [2024-07-15 10:10:35.128107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.772 [2024-07-15 10:10:35.128115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:21.772 [2024-07-15 10:10:35.131752] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:21.772 [2024-07-15 10:10:35.131780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.772 [2024-07-15 10:10:35.131787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:21.772 [2024-07-15 10:10:35.135387] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:21.772 [2024-07-15 10:10:35.135417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.772 [2024-07-15 10:10:35.135425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:21.772 [2024-07-15 10:10:35.138078] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:21.772 [2024-07-15 10:10:35.138107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.772 [2024-07-15 10:10:35.138114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:21.772 [2024-07-15 10:10:35.141354] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:21.772 [2024-07-15 10:10:35.141386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.772 [2024-07-15 10:10:35.141394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:21.772 [2024-07-15 10:10:35.145433] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:21.772 [2024-07-15 10:10:35.145466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.772 [2024-07-15 10:10:35.145474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:21.772 [2024-07-15 10:10:35.149411] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:21.772 [2024-07-15 10:10:35.149444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.772 [2024-07-15 10:10:35.149452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:21.772 [2024-07-15 10:10:35.153255] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:21.772 [2024-07-15 10:10:35.153287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.772 [2024-07-15 10:10:35.153296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:21.772 [2024-07-15 10:10:35.156820] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:21.772 [2024-07-15 10:10:35.156852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.772 [2024-07-15 10:10:35.156860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:21.772 [2024-07-15 10:10:35.160217] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:21.772 [2024-07-15 10:10:35.160247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.772 [2024-07-15 10:10:35.160255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:21.772 [2024-07-15 10:10:35.163635] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:21.772 [2024-07-15 10:10:35.163674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.772 [2024-07-15 10:10:35.163682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:21.772 [2024-07-15 10:10:35.165910] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:21.772 [2024-07-15 10:10:35.165934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.772 [2024-07-15 10:10:35.165942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:21.772 [2024-07-15 10:10:35.169005] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:21.772 [2024-07-15 10:10:35.169037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.772 [2024-07-15 10:10:35.169044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:21.772 [2024-07-15 10:10:35.172648] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:21.772 [2024-07-15 10:10:35.172690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.772 [2024-07-15 10:10:35.172697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:21.772 [2024-07-15 10:10:35.176424] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:21.772 [2024-07-15 10:10:35.176453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.772 [2024-07-15 10:10:35.176461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:21.772 [2024-07-15 10:10:35.179992] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:21.772 [2024-07-15 10:10:35.180024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.772 [2024-07-15 10:10:35.180032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:21.772 [2024-07-15 10:10:35.183824] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:21.772 [2024-07-15 10:10:35.183866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.772 [2024-07-15 10:10:35.183874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
00:29:21.772 [2024-07-15 10:10:35.187533] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:21.772 [2024-07-15 10:10:35.187566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.772 [2024-07-15 10:10:35.187574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:21.772 [2024-07-15 10:10:35.190205] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:21.772 [2024-07-15 10:10:35.190235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.772 [2024-07-15 10:10:35.190243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:21.772 [2024-07-15 10:10:35.194418] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:21.773 [2024-07-15 10:10:35.194451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.773 [2024-07-15 10:10:35.194459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:21.773 [2024-07-15 10:10:35.198744] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:21.773 [2024-07-15 10:10:35.198776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.773 [2024-07-15 10:10:35.198784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:21.773 [2024-07-15 10:10:35.202687] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:21.773 [2024-07-15 10:10:35.202717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.773 [2024-07-15 10:10:35.202725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:21.773 [2024-07-15 10:10:35.205427] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:21.773 [2024-07-15 10:10:35.205456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.773 [2024-07-15 10:10:35.205464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:21.773 [2024-07-15 10:10:35.208954] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:21.773 [2024-07-15 10:10:35.208985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.773 [2024-07-15 10:10:35.208993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:21.773 [2024-07-15 10:10:35.212960] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:21.773 [2024-07-15 10:10:35.212994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.773 [2024-07-15 10:10:35.213001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:21.773 [2024-07-15 10:10:35.216838] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:21.773 [2024-07-15 10:10:35.216871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.773 [2024-07-15 10:10:35.216878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:21.773 [2024-07-15 10:10:35.220638] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:21.773 [2024-07-15 10:10:35.220682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.773 [2024-07-15 10:10:35.220689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:21.773 [2024-07-15 10:10:35.222995] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:21.773 [2024-07-15 10:10:35.223021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.773 [2024-07-15 10:10:35.223028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:21.773 [2024-07-15 10:10:35.226410] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:21.773 [2024-07-15 10:10:35.226441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.773 [2024-07-15 10:10:35.226449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:21.773 [2024-07-15 10:10:35.230054] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:21.773 [2024-07-15 10:10:35.230084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.773 [2024-07-15 10:10:35.230092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:21.773 [2024-07-15 10:10:35.233742] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:21.773 [2024-07-15 10:10:35.233771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.773 [2024-07-15 10:10:35.233779] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:21.773 [2024-07-15 10:10:35.237344] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:21.773 [2024-07-15 10:10:35.237375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.773 [2024-07-15 10:10:35.237382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:21.773 [2024-07-15 10:10:35.241052] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:21.773 [2024-07-15 10:10:35.241084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.773 [2024-07-15 10:10:35.241092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:21.773 [2024-07-15 10:10:35.244885] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:21.773 [2024-07-15 10:10:35.244917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.773 [2024-07-15 10:10:35.244926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:21.773 [2024-07-15 10:10:35.247647] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:21.773 [2024-07-15 10:10:35.247683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.773 [2024-07-15 10:10:35.247691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:21.773 [2024-07-15 10:10:35.250650] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:21.773 [2024-07-15 10:10:35.250685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.773 [2024-07-15 10:10:35.250692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:21.773 [2024-07-15 10:10:35.254234] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:21.773 [2024-07-15 10:10:35.254264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.773 [2024-07-15 10:10:35.254271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:21.773 [2024-07-15 10:10:35.257867] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:21.773 [2024-07-15 10:10:35.257896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.773 [2024-07-15 
10:10:35.257903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:21.773 [2024-07-15 10:10:35.261563] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:21.773 [2024-07-15 10:10:35.261597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.773 [2024-07-15 10:10:35.261605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:21.773 [2024-07-15 10:10:35.265549] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:21.773 [2024-07-15 10:10:35.265582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.773 [2024-07-15 10:10:35.265590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:21.773 [2024-07-15 10:10:35.269093] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:21.773 [2024-07-15 10:10:35.269124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.773 [2024-07-15 10:10:35.269132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:21.773 [2024-07-15 10:10:35.271328] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:21.773 [2024-07-15 10:10:35.271358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.773 [2024-07-15 10:10:35.271366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:21.773 [2024-07-15 10:10:35.275509] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:21.773 [2024-07-15 10:10:35.275544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.773 [2024-07-15 10:10:35.275552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:21.773 [2024-07-15 10:10:35.279345] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:21.773 [2024-07-15 10:10:35.279377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.773 [2024-07-15 10:10:35.279385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:21.773 [2024-07-15 10:10:35.283080] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:21.773 [2024-07-15 10:10:35.283110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:29:21.773 [2024-07-15 10:10:35.283118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:21.773 [2024-07-15 10:10:35.286057] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:21.773 [2024-07-15 10:10:35.286088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.773 [2024-07-15 10:10:35.286095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:21.773 [2024-07-15 10:10:35.289147] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:21.773 [2024-07-15 10:10:35.289177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.774 [2024-07-15 10:10:35.289185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:21.774 [2024-07-15 10:10:35.291264] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:21.774 [2024-07-15 10:10:35.291293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.774 [2024-07-15 10:10:35.291300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:21.774 [2024-07-15 10:10:35.294976] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:21.774 [2024-07-15 10:10:35.295009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.774 [2024-07-15 10:10:35.295016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:21.774 [2024-07-15 10:10:35.298335] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:21.774 [2024-07-15 10:10:35.298368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.774 [2024-07-15 10:10:35.298376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:21.774 [2024-07-15 10:10:35.301273] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:21.774 [2024-07-15 10:10:35.301305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.774 [2024-07-15 10:10:35.301313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:21.774 [2024-07-15 10:10:35.304336] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:21.774 [2024-07-15 10:10:35.304372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 
nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.774 [2024-07-15 10:10:35.304380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:21.774 [2024-07-15 10:10:35.307523] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:21.774 [2024-07-15 10:10:35.307552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.774 [2024-07-15 10:10:35.307560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:21.774 [2024-07-15 10:10:35.310795] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:21.774 [2024-07-15 10:10:35.310828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.774 [2024-07-15 10:10:35.310836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:21.774 [2024-07-15 10:10:35.313936] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:21.774 [2024-07-15 10:10:35.313968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.774 [2024-07-15 10:10:35.313976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:21.774 [2024-07-15 10:10:35.317196] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:21.774 [2024-07-15 10:10:35.317228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.774 [2024-07-15 10:10:35.317236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:21.774 [2024-07-15 10:10:35.319923] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:21.774 [2024-07-15 10:10:35.319950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.774 [2024-07-15 10:10:35.319958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:21.774 [2024-07-15 10:10:35.323210] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:21.774 [2024-07-15 10:10:35.323242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.774 [2024-07-15 10:10:35.323249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:21.774 [2024-07-15 10:10:35.326992] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:21.774 [2024-07-15 10:10:35.327027] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.774 [2024-07-15 10:10:35.327035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:21.774 [2024-07-15 10:10:35.330548] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:21.774 [2024-07-15 10:10:35.330580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.774 [2024-07-15 10:10:35.330588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:21.774 [2024-07-15 10:10:35.332595] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:21.774 [2024-07-15 10:10:35.332624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.774 [2024-07-15 10:10:35.332632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:21.774 [2024-07-15 10:10:35.336087] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:21.774 [2024-07-15 10:10:35.336119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.774 [2024-07-15 10:10:35.336127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:21.774 [2024-07-15 10:10:35.339197] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:21.774 [2024-07-15 10:10:35.339229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.774 [2024-07-15 10:10:35.339236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:21.774 [2024-07-15 10:10:35.342350] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:21.774 [2024-07-15 10:10:35.342379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.774 [2024-07-15 10:10:35.342387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:21.774 [2024-07-15 10:10:35.345128] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 00:29:21.774 [2024-07-15 10:10:35.345158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.774 [2024-07-15 10:10:35.345165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:21.774 [2024-07-15 10:10:35.348177] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380) 
00:29:21.774 [2024-07-15 10:10:35.348220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:21.774 [2024-07-15 10:10:35.348227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:29:21.774 [2024-07-15 10:10:35.351299] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380)
00:29:21.774 [2024-07-15 10:10:35.351331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:21.774 [2024-07-15 10:10:35.351338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:29:22.035 [2024-07-15 10:10:35.354038] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380)
00:29:22.035 [2024-07-15 10:10:35.354069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:22.035 [2024-07-15 10:10:35.354076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:22.035 [2024-07-15 10:10:35.357612] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fcb380)
00:29:22.035 [2024-07-15 10:10:35.357645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:22.035 [2024-07-15 10:10:35.357653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:29:22.035
00:29:22.035 Latency(us)
00:29:22.035 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:22.035 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:29:22.035 nvme0n1 : 2.00 9253.86 1156.73 0.00 0.00 1725.91 482.93 4893.74
00:29:22.035 ===================================================================================================================
00:29:22.035 Total : 9253.86 1156.73 0.00 0.00 1725.91 482.93 4893.74
00:29:22.035 0
00:29:22.035 10:10:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:29:22.035 10:10:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:29:22.035 10:10:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:29:22.035 | .driver_specific
00:29:22.035 | .nvme_error
00:29:22.035 | .status_code
00:29:22.035 | .command_transient_transport_error'
00:29:22.035 10:10:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:29:22.294 10:10:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 597 > 0 ))
00:29:22.294 10:10:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 93550
00:29:22.294 10:10:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 93550 ']'
00:29:22.294 10:10:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 93550
00:29:22.294 10:10:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname
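The trace above is where the randread pass is judged: host/digest.sh pipes bdev_get_iostat for nvme0n1 through jq and asserts that the transient-transport-error counter is non-zero (here 597, one per injected digest failure). A minimal sketch of that check, assuming the same rpc.py path, bperf socket, and jq filter that appear in the trace; the helper name and exit handling below are illustrative, not the exact script:

#!/usr/bin/env bash
# Sketch: count COMMAND TRANSIENT TRANSPORT ERROR completions reported by bdevperf.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/bperf.sock

get_transient_errcount() {
    local bdev=$1
    "$rpc" -s "$sock" bdev_get_iostat -b "$bdev" \
        | jq -r '.bdevs[0]
            | .driver_specific
            | .nvme_error
            | .status_code
            | .command_transient_transport_error'
}

errs=$(get_transient_errcount nvme0n1)
# The pass only counts if the injected crc32c corruption produced at least one error.
(( errs > 0 )) || exit 1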
00:29:22.294 10:10:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:29:22.294 10:10:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 93550
00:29:22.294 10:10:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:29:22.294 10:10:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:29:22.294 killing process with pid 93550
00:29:22.294 10:10:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 93550'
00:29:22.294 10:10:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 93550
00:29:22.294 Received shutdown signal, test time was about 2.000000 seconds
00:29:22.294
00:29:22.294 Latency(us)
00:29:22.294 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:22.294 ===================================================================================================================
00:29:22.294 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:29:22.294 10:10:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 93550
00:29:22.294 10:10:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128
00:29:22.294 10:10:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:29:22.294 10:10:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:29:22.294 10:10:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
00:29:22.294 10:10:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128
00:29:22.294 10:10:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=93640
00:29:22.294 10:10:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 93640 /var/tmp/bperf.sock
00:29:22.294 10:10:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
00:29:22.294 10:10:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 93640 ']'
00:29:22.294 10:10:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock
00:29:22.294 10:10:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100
00:29:22.294 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:29:22.294 10:10:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:29:22.294 10:10:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable
00:29:22.294 10:10:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:29:22.553 [2024-07-15 10:10:35.900227] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization...
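At this point run_bperf_err restarts bdevperf (pid 93640) for the randwrite pass: 4096-byte writes at queue depth 128 for 2 seconds, served over the bperf RPC socket, and the test blocks until the new process is listening. A rough sketch of that launch-and-wait step, using the binary path and flags from the trace; the polling loop below stands in for autotest_common.sh's waitforlisten and is a simplified assumption, not the actual helper:

# Sketch: start bdevperf in RPC-server mode (-z) and wait for its UNIX socket.
bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
sock=/var/tmp/bperf.sock

"$bdevperf" -m 2 -r "$sock" -w randwrite -o 4096 -t 2 -q 128 -z &
bperfpid=$!

# Simplified stand-in for waitforlisten: poll (up to max_retries=100, as in the trace)
# until the RPC socket appears, bailing out if bdevperf has already exited.
for ((i = 0; i < 100; i++)); do
    [[ -S "$sock" ]] && break
    kill -0 "$bperfpid" || exit 1
    sleep 0.1
done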
00:29:22.553 [2024-07-15 10:10:35.900295] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93640 ]
00:29:22.553 [2024-07-15 10:10:36.032814] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:29:22.812 [2024-07-15 10:10:36.138084] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:29:23.380 10:10:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:29:23.380 10:10:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0
00:29:23.380 10:10:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:29:23.380 10:10:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:29:23.638 10:10:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:29:23.638 10:10:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:29:23.638 10:10:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:29:23.638 10:10:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:29:23.638 10:10:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:29:23.638 10:10:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:29:23.897 nvme0n1
00:29:23.897 10:10:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:29:23.897 10:10:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:29:23.897 10:10:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:29:23.897 10:10:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:29:23.897 10:10:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:29:23.897 10:10:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:29:23.897 Running I/O for 2 seconds...
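The randwrite pass is wired up the same way as the randread one: per-error NVMe statistics are enabled with an unlimited bdev retry count, any previous crc32c injection is cleared, the controller is attached with data digest (--ddgst) enabled, 256 corrupt crc32c operations are injected, and perform_tests starts the 2-second run. A condensed sketch of that RPC sequence, using the rpc.py/bdevperf.py paths and target address from the trace; the socket used by rpc_cmd for the accel_error_inject_error calls is not shown in this excerpt, so target_sock below is an assumption:

# Sketch: configure digest-error injection and kick off the workload.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
bperf_py=/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py
bperf_sock=/var/tmp/bperf.sock
target_sock=/var/tmp/spdk.sock   # assumed rpc_cmd destination, not visible in this trace

"$rpc" -s "$bperf_sock" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
"$rpc" -s "$target_sock" accel_error_inject_error -o crc32c -t disable        # start from a clean state
"$rpc" -s "$bperf_sock" bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0                             # data digest enabled on the host
"$rpc" -s "$target_sock" accel_error_inject_error -o crc32c -t corrupt -i 256  # corrupt 256 crc32c operations
"$bperf_py" -s "$bperf_sock" perform_tests                                     # "Running I/O for 2 seconds..."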
00:29:23.897 [2024-07-15 10:10:37.456656] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01880) with pdu=0x2000190ee5c8 00:29:23.897 [2024-07-15 10:10:37.457515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:14959 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:23.897 [2024-07-15 10:10:37.457547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:23.897 [2024-07-15 10:10:37.467220] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01880) with pdu=0x2000190e23b8 00:29:23.897 [2024-07-15 10:10:37.467900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:657 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:23.897 [2024-07-15 10:10:37.467937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:29:23.897 [2024-07-15 10:10:37.480532] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01880) with pdu=0x2000190ebb98 00:29:24.157 [2024-07-15 10:10:37.482013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:12088 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.157 [2024-07-15 10:10:37.482048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:29:24.157 [2024-07-15 10:10:37.490399] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01880) with pdu=0x2000190f1868 00:29:24.157 [2024-07-15 10:10:37.492060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:1883 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.157 [2024-07-15 10:10:37.492094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:29:24.157 [2024-07-15 10:10:37.499975] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01880) with pdu=0x2000190e2c28 00:29:24.157 [2024-07-15 10:10:37.500687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:19946 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.157 [2024-07-15 10:10:37.500717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:29:24.157 [2024-07-15 10:10:37.511216] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01880) with pdu=0x2000190e8088 00:29:24.157 [2024-07-15 10:10:37.511934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:7930 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.157 [2024-07-15 10:10:37.511964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:29:24.157 [2024-07-15 10:10:37.523073] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01880) with pdu=0x2000190f0350 00:29:24.157 [2024-07-15 10:10:37.523916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:25059 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.157 [2024-07-15 10:10:37.523947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 
sqhd:0045 p:0 m:0 dnr:0 00:29:24.157 [2024-07-15 10:10:37.533552] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01880) with pdu=0x2000190e6738 00:29:24.157 [2024-07-15 10:10:37.534266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:4778 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.157 [2024-07-15 10:10:37.534296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:29:24.157 [2024-07-15 10:10:37.546781] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01880) with pdu=0x2000190f2d80 00:29:24.157 [2024-07-15 10:10:37.548270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:6169 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.157 [2024-07-15 10:10:37.548303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:29:24.157 [2024-07-15 10:10:37.557205] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01880) with pdu=0x2000190df988 00:29:24.157 [2024-07-15 10:10:37.558369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:5851 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.157 [2024-07-15 10:10:37.558407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:29:24.157 [2024-07-15 10:10:37.568046] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01880) with pdu=0x2000190fe720 00:29:24.157 [2024-07-15 10:10:37.569087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:23824 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.157 [2024-07-15 10:10:37.569118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:29:24.157 [2024-07-15 10:10:37.579875] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01880) with pdu=0x2000190f9f68 00:29:24.157 [2024-07-15 10:10:37.581362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:19614 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.157 [2024-07-15 10:10:37.581392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:29:24.157 [2024-07-15 10:10:37.590471] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01880) with pdu=0x2000190eee38 00:29:24.157 [2024-07-15 10:10:37.591812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:14787 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.157 [2024-07-15 10:10:37.591840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:29:24.157 [2024-07-15 10:10:37.601223] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01880) with pdu=0x2000190f46d0 00:29:24.157 [2024-07-15 10:10:37.602088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:6680 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.157 [2024-07-15 10:10:37.602119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:4 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:29:24.157 [2024-07-15 10:10:37.612619] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01880) with pdu=0x2000190dfdc0 00:29:24.157 [2024-07-15 10:10:37.613790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:20380 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.157 [2024-07-15 10:10:37.613834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:29:24.157 [2024-07-15 10:10:37.623286] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01880) with pdu=0x2000190ddc00 00:29:24.157 [2024-07-15 10:10:37.624305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:13811 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.157 [2024-07-15 10:10:37.624333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:29:24.157 [2024-07-15 10:10:37.636089] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01880) with pdu=0x2000190edd58 00:29:24.157 [2024-07-15 10:10:37.637748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:23391 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.157 [2024-07-15 10:10:37.637793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:29:24.157 [2024-07-15 10:10:37.644217] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01880) with pdu=0x2000190de470 00:29:24.157 [2024-07-15 10:10:37.644953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:16838 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.157 [2024-07-15 10:10:37.644987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:29:24.157 [2024-07-15 10:10:37.655964] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01880) with pdu=0x2000190e5ec8 00:29:24.157 [2024-07-15 10:10:37.656771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:15129 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.157 [2024-07-15 10:10:37.656804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:29:24.157 [2024-07-15 10:10:37.669465] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01880) with pdu=0x2000190f6cc8 00:29:24.157 [2024-07-15 10:10:37.670847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15250 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.157 [2024-07-15 10:10:37.670877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:29:24.157 [2024-07-15 10:10:37.679167] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01880) with pdu=0x2000190eff18 00:29:24.157 [2024-07-15 10:10:37.679735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:12053 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.157 [2024-07-15 10:10:37.679765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:29:24.157 [2024-07-15 10:10:37.693129] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01880) with pdu=0x2000190eea00 00:29:24.158 [2024-07-15 10:10:37.694953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:11814 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.158 [2024-07-15 10:10:37.694983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:29:24.158 [2024-07-15 10:10:37.701146] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01880) with pdu=0x2000190eaef0 00:29:24.158 [2024-07-15 10:10:37.701909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:18886 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.158 [2024-07-15 10:10:37.701937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:29:24.158 [2024-07-15 10:10:37.715237] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01880) with pdu=0x2000190ea680 00:29:24.158 [2024-07-15 10:10:37.717058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:14487 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.158 [2024-07-15 10:10:37.717090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:29:24.158 [2024-07-15 10:10:37.723203] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01880) with pdu=0x2000190e9e10 00:29:24.158 [2024-07-15 10:10:37.723910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:25276 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.158 [2024-07-15 10:10:37.723939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:29:24.158 [2024-07-15 10:10:37.735808] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01880) with pdu=0x2000190f7538 00:29:24.158 [2024-07-15 10:10:37.736710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:16558 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.158 [2024-07-15 10:10:37.736740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:29:24.417 [2024-07-15 10:10:37.744670] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01880) with pdu=0x2000190e0ea0 00:29:24.417 [2024-07-15 10:10:37.745701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:5265 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.417 [2024-07-15 10:10:37.745730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:29:24.417 [2024-07-15 10:10:37.755849] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01880) with pdu=0x2000190f7538 00:29:24.417 [2024-07-15 10:10:37.757371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:1104 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.417 [2024-07-15 10:10:37.757402] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:29:24.417 [2024-07-15 10:10:37.762670] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01880) with pdu=0x2000190e4de8 00:29:24.417 [2024-07-15 10:10:37.763288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:4571 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.417 [2024-07-15 10:10:37.763313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:29:24.417 [2024-07-15 10:10:37.772176] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01880) with pdu=0x2000190f0ff8 00:29:24.417 [2024-07-15 10:10:37.772937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:3583 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.417 [2024-07-15 10:10:37.772963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:29:24.417 [2024-07-15 10:10:37.781225] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01880) with pdu=0x2000190eaab8 00:29:24.417 [2024-07-15 10:10:37.782024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:14620 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.417 [2024-07-15 10:10:37.782050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:29:24.417 [2024-07-15 10:10:37.789969] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01880) with pdu=0x2000190f6890 00:29:24.417 [2024-07-15 10:10:37.790596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:21559 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.418 [2024-07-15 10:10:37.790621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:29:24.418 [2024-07-15 10:10:37.800324] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01880) with pdu=0x2000190e27f0 00:29:24.418 [2024-07-15 10:10:37.801557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:940 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.418 [2024-07-15 10:10:37.801587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:29:24.418 [2024-07-15 10:10:37.809446] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01880) with pdu=0x2000190e23b8 00:29:24.418 [2024-07-15 10:10:37.810225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:25581 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.418 [2024-07-15 10:10:37.810253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:29:24.418 [2024-07-15 10:10:37.818602] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01880) with pdu=0x2000190e5220 00:29:24.418 [2024-07-15 10:10:37.819612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:18356 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.418 [2024-07-15 10:10:37.819640] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:29:24.418 [2024-07-15 10:10:37.827192] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01880) with pdu=0x2000190ecc78 00:29:24.418 [2024-07-15 10:10:37.828097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:22007 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.418 [2024-07-15 10:10:37.828123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:29:24.418 [2024-07-15 10:10:37.835866] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01880) with pdu=0x2000190e49b0 00:29:24.418 [2024-07-15 10:10:37.836647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:12564 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.418 [2024-07-15 10:10:37.836680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:29:24.418 [2024-07-15 10:10:37.846139] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01880) with pdu=0x2000190fd208 00:29:24.418 [2024-07-15 10:10:37.847470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:10110 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.418 [2024-07-15 10:10:37.847507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:29:24.418 [2024-07-15 10:10:37.854460] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01880) with pdu=0x2000190f4f40 00:29:24.418 [2024-07-15 10:10:37.856042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:9562 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.418 [2024-07-15 10:10:37.856071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:24.418 [2024-07-15 10:10:37.864471] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01880) with pdu=0x2000190ebb98 00:29:24.418 [2024-07-15 10:10:37.865314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:22892 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.418 [2024-07-15 10:10:37.865344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:24.418 [2024-07-15 10:10:37.872826] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01880) with pdu=0x2000190ef6a8 00:29:24.418 [2024-07-15 10:10:37.873844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:11947 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.418 [2024-07-15 10:10:37.873870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:29:24.418 [2024-07-15 10:10:37.881992] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01880) with pdu=0x2000190fc560 00:29:24.418 [2024-07-15 10:10:37.882907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:22242 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.418 [2024-07-15 
10:10:37.882933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:29:24.418 [2024-07-15 10:10:37.891378] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01880) with pdu=0x2000190e0a68 00:29:24.418 [2024-07-15 10:10:37.892283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:10684 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.418 [2024-07-15 10:10:37.892307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:29:24.418 [2024-07-15 10:10:37.900363] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01880) with pdu=0x2000190ee190 00:29:24.418 [2024-07-15 10:10:37.901358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:2721 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.418 [2024-07-15 10:10:37.901387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:29:24.418 [2024-07-15 10:10:37.911797] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01880) with pdu=0x2000190df550 00:29:24.418 [2024-07-15 10:10:37.913334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:4793 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.418 [2024-07-15 10:10:37.913362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:29:24.418 [2024-07-15 10:10:37.918457] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01880) with pdu=0x2000190f20d8 00:29:24.418 [2024-07-15 10:10:37.919138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:17279 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.418 [2024-07-15 10:10:37.919165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:29:24.418 [2024-07-15 10:10:37.929396] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01880) with pdu=0x2000190e4140 00:29:24.418 [2024-07-15 10:10:37.930520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:11580 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.418 [2024-07-15 10:10:37.930552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:24.418 [2024-07-15 10:10:37.938105] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01880) with pdu=0x2000190e0ea0 00:29:24.418 [2024-07-15 10:10:37.939049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:25555 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.418 [2024-07-15 10:10:37.939076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:29:24.418 [2024-07-15 10:10:37.946806] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01880) with pdu=0x2000190ee190 00:29:24.418 [2024-07-15 10:10:37.947635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:13110 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:29:24.418 [2024-07-15 10:10:37.947673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:29:24.418 [2024-07-15 10:10:37.957797] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01880) with pdu=0x2000190e1710 00:29:24.418 [2024-07-15 10:10:37.959347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:4071 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.418 [2024-07-15 10:10:37.959377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:29:24.418 [2024-07-15 10:10:37.967458] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01880) with pdu=0x2000190e49b0 00:29:24.418 [2024-07-15 10:10:37.969009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:6887 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.418 [2024-07-15 10:10:37.969037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.418 [2024-07-15 10:10:37.976756] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01880) with pdu=0x2000190e12d8 00:29:24.418 [2024-07-15 10:10:37.978288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:2923 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.418 [2024-07-15 10:10:37.978316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:29:24.418 [2024-07-15 10:10:37.986489] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01880) with pdu=0x2000190e8d30 00:29:24.418 [2024-07-15 10:10:37.987949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:6088 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.418 [2024-07-15 10:10:37.987978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:24.418 [2024-07-15 10:10:37.995092] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01880) with pdu=0x2000190fda78 00:29:24.418 [2024-07-15 10:10:37.996716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:16863 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.418 [2024-07-15 10:10:37.996745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:24.678 [2024-07-15 10:10:38.005561] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01880) with pdu=0x2000190df988 00:29:24.678 [2024-07-15 10:10:38.006523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:2685 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.678 [2024-07-15 10:10:38.006555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:24.678 [2024-07-15 10:10:38.014289] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01880) with pdu=0x2000190f92c0 00:29:24.678 [2024-07-15 10:10:38.015334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:1543 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:29:24.678 [2024-07-15 10:10:38.015364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:29:24.678 [2024-07-15 10:10:38.023519] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01880) with pdu=0x2000190dece0 00:29:24.678 [2024-07-15 10:10:38.024312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:6087 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.678 [2024-07-15 10:10:38.024338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:29:24.678 [2024-07-15 10:10:38.034011] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01880) with pdu=0x2000190dfdc0 00:29:24.678 [2024-07-15 10:10:38.035255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:13401 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.678 [2024-07-15 10:10:38.035286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:29:24.678 [2024-07-15 10:10:38.043280] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01880) with pdu=0x2000190fc128 00:29:24.678 [2024-07-15 10:10:38.044403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:4720 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.678 [2024-07-15 10:10:38.044431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:29:24.678 [2024-07-15 10:10:38.052808] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01880) with pdu=0x2000190e0630 00:29:24.678 [2024-07-15 10:10:38.053843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:18545 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.678 [2024-07-15 10:10:38.053870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:24.678 [2024-07-15 10:10:38.062673] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01880) with pdu=0x2000190e5658 00:29:24.678 [2024-07-15 10:10:38.063632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:19516 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.678 [2024-07-15 10:10:38.063667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:29:24.678 [2024-07-15 10:10:38.072310] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01880) with pdu=0x2000190ff3c8 00:29:24.678 [2024-07-15 10:10:38.073127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:1402 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.678 [2024-07-15 10:10:38.073157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:24.678 [2024-07-15 10:10:38.082530] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01880) with pdu=0x2000190eff18 00:29:24.678 [2024-07-15 10:10:38.083021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 
lba:17884 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.678 [2024-07-15 10:10:38.083049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:29:24.678 [2024-07-15 10:10:38.093337] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01880) with pdu=0x2000190edd58 00:29:24.678 [2024-07-15 10:10:38.094537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:18852 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.678 [2024-07-15 10:10:38.094565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:29:24.678 [2024-07-15 10:10:38.102368] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01880) with pdu=0x2000190fda78 00:29:24.678 [2024-07-15 10:10:38.103367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:10717 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.678 [2024-07-15 10:10:38.103394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:29:24.678 [2024-07-15 10:10:38.112641] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01880) with pdu=0x2000190e38d0 00:29:24.678 [2024-07-15 10:10:38.114086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:9364 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.678 [2024-07-15 10:10:38.114114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:29:24.678 [2024-07-15 10:10:38.121819] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01880) with pdu=0x2000190e88f8 00:29:24.678 [2024-07-15 10:10:38.123234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:22163 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.678 [2024-07-15 10:10:38.123262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:29:24.678 [2024-07-15 10:10:38.131509] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01880) with pdu=0x2000190e4de8 00:29:24.678 [2024-07-15 10:10:38.133117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:4302 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.678 [2024-07-15 10:10:38.133149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:29:24.678 [2024-07-15 10:10:38.138584] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01880) with pdu=0x2000190f96f8 00:29:24.678 [2024-07-15 10:10:38.139338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:17169 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.678 [2024-07-15 10:10:38.139369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:29:24.678 [2024-07-15 10:10:38.149718] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01880) with pdu=0x2000190ed0b0 00:29:24.678 [2024-07-15 10:10:38.150935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:13 nsid:1 lba:22572 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.678 [2024-07-15 10:10:38.150968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:29:24.678 [2024-07-15 10:10:38.157758] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01880) with pdu=0x2000190fac10 00:29:24.678 [2024-07-15 10:10:38.159174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:4883 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.678 [2024-07-15 10:10:38.159204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:29:24.678 [2024-07-15 10:10:38.167830] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01880) with pdu=0x2000190f6cc8 00:29:24.678 [2024-07-15 10:10:38.168651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:1865 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.678 [2024-07-15 10:10:38.168698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:29:24.678 [2024-07-15 10:10:38.176609] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01880) with pdu=0x2000190f92c0 00:29:24.678 [2024-07-15 10:10:38.177637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:16880 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.678 [2024-07-15 10:10:38.177676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:29:24.678 [2024-07-15 10:10:38.185426] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01880) with pdu=0x2000190ec408 00:29:24.678 [2024-07-15 10:10:38.186263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:9203 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.678 [2024-07-15 10:10:38.186292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:29:24.678 [2024-07-15 10:10:38.193946] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01880) with pdu=0x2000190e49b0 00:29:24.678 [2024-07-15 10:10:38.194644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:15500 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.678 [2024-07-15 10:10:38.194678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:29:24.678 [2024-07-15 10:10:38.202860] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01880) with pdu=0x2000190f7da8 00:29:24.678 [2024-07-15 10:10:38.203320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:1695 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.678 [2024-07-15 10:10:38.203346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:24.678 [2024-07-15 10:10:38.212194] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01880) with pdu=0x2000190f7538 00:29:24.678 [2024-07-15 10:10:38.212816] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:3651 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.678 [2024-07-15 10:10:38.212843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:29:24.678 [2024-07-15 10:10:38.221503] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01880) with pdu=0x2000190f0bc0 00:29:24.678 [2024-07-15 10:10:38.222237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:13007 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.678 [2024-07-15 10:10:38.222267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:29:24.678 [2024-07-15 10:10:38.230271] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01880) with pdu=0x2000190e88f8 00:29:24.678 [2024-07-15 10:10:38.231332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:2269 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.678 [2024-07-15 10:10:38.231362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:29:24.679 [2024-07-15 10:10:38.239378] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01880) with pdu=0x2000190de038 00:29:24.679 [2024-07-15 10:10:38.240331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:24206 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.679 [2024-07-15 10:10:38.240360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:24.679 [2024-07-15 10:10:38.247674] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01880) with pdu=0x2000190f2d80 00:29:24.679 [2024-07-15 10:10:38.248404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:12160 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.679 [2024-07-15 10:10:38.248430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:29:24.679 [2024-07-15 10:10:38.256257] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01880) with pdu=0x2000190fd640 00:29:24.679 [2024-07-15 10:10:38.256938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:2259 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.679 [2024-07-15 10:10:38.256968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:29:24.937 [2024-07-15 10:10:38.267933] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01880) with pdu=0x2000190f20d8 00:29:24.937 [2024-07-15 10:10:38.269291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:4247 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.937 [2024-07-15 10:10:38.269326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:29:24.937 [2024-07-15 10:10:38.276137] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01880) with pdu=0x2000190ff3c8 00:29:24.937 [2024-07-15 
10:10:38.277704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:7936 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.937 [2024-07-15 10:10:38.277733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:29:24.937 [2024-07-15 10:10:38.284237] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01880) with pdu=0x2000190f5378 00:29:24.937 [2024-07-15 10:10:38.284919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:7886 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.937 [2024-07-15 10:10:38.284950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:29:24.937 [2024-07-15 10:10:38.295018] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01880) with pdu=0x2000190e8088 00:29:24.937 [2024-07-15 10:10:38.296138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:20803 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.937 [2024-07-15 10:10:38.296165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:29:24.937 [2024-07-15 10:10:38.303430] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01880) with pdu=0x2000190ebfd0 00:29:24.937 [2024-07-15 10:10:38.304333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:21752 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.937 [2024-07-15 10:10:38.304360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:29:24.937 [2024-07-15 10:10:38.312287] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01880) with pdu=0x2000190ddc00 00:29:24.937 [2024-07-15 10:10:38.313281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:22033 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.937 [2024-07-15 10:10:38.313310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:29:24.937 [2024-07-15 10:10:38.323439] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01880) with pdu=0x2000190e2c28 00:29:24.937 [2024-07-15 10:10:38.324874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:18694 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.937 [2024-07-15 10:10:38.324900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:29:24.937 [2024-07-15 10:10:38.329892] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01880) with pdu=0x2000190eaef0 00:29:24.937 [2024-07-15 10:10:38.330633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:4240 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.937 [2024-07-15 10:10:38.330667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:29:24.937 [2024-07-15 10:10:38.339628] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01880) with pdu=0x2000190ea680 
00:29:24.937 [2024-07-15 10:10:38.340479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:497 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.937 [2024-07-15 10:10:38.340510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:29:24.937 [2024-07-15 10:10:38.348946] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01880) with pdu=0x2000190ff3c8 00:29:24.937 [2024-07-15 10:10:38.349821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:7692 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.937 [2024-07-15 10:10:38.349851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:29:24.937 [2024-07-15 10:10:38.357543] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01880) with pdu=0x2000190dfdc0 00:29:24.937 [2024-07-15 10:10:38.358299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:24751 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.937 [2024-07-15 10:10:38.358327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:24.937 [2024-07-15 10:10:38.368287] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01880) with pdu=0x2000190f9f68 00:29:24.937 [2024-07-15 10:10:38.369599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:21088 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.937 [2024-07-15 10:10:38.369630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:29:24.937 [2024-07-15 10:10:38.377564] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01880) with pdu=0x2000190f6890 00:29:24.937 [2024-07-15 10:10:38.378757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:7199 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.937 [2024-07-15 10:10:38.378786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:29:24.937 [2024-07-15 10:10:38.384880] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01880) with pdu=0x2000190f5be8 00:29:24.937 [2024-07-15 10:10:38.385600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:14189 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.937 [2024-07-15 10:10:38.385628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:24.937 [2024-07-15 10:10:38.393891] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01880) with pdu=0x2000190ea248 00:29:24.937 [2024-07-15 10:10:38.394496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:22528 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.937 [2024-07-15 10:10:38.394528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:29:24.937 [2024-07-15 10:10:38.406148] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01880) with 
pdu=0x2000190f9f68 00:29:24.937 [2024-07-15 10:10:38.407449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:901 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.937 [2024-07-15 10:10:38.407485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:29:24.937 [2024-07-15 10:10:38.415929] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01880) with pdu=0x2000190f6890 00:29:24.937 [2024-07-15 10:10:38.417261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:1749 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.937 [2024-07-15 10:10:38.417296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:24.937 [2024-07-15 10:10:38.423713] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01880) with pdu=0x2000190f35f0 00:29:24.937 [2024-07-15 10:10:38.424543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:12000 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.937 [2024-07-15 10:10:38.424573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:29:24.937 [2024-07-15 10:10:38.434093] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01880) with pdu=0x2000190f4298 00:29:24.937 [2024-07-15 10:10:38.434960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:19169 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.937 [2024-07-15 10:10:38.434991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:29:24.937 [2024-07-15 10:10:38.444345] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01880) with pdu=0x2000190ea248 00:29:24.937 [2024-07-15 10:10:38.445021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:17413 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.937 [2024-07-15 10:10:38.445055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:29:24.937 [2024-07-15 10:10:38.454500] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01880) with pdu=0x2000190ea248 00:29:24.938 [2024-07-15 10:10:38.455066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:1551 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.938 [2024-07-15 10:10:38.455096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:29:24.938 [2024-07-15 10:10:38.466869] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01880) with pdu=0x2000190f2510 00:29:24.938 [2024-07-15 10:10:38.468149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:15167 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.938 [2024-07-15 10:10:38.468188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:29:24.938 [2024-07-15 10:10:38.476867] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1b01880) with pdu=0x2000190f4b08 00:29:24.938 [2024-07-15 10:10:38.477976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:15592 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.938 [2024-07-15 10:10:38.478012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:29:24.938 [2024-07-15 10:10:38.486214] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01880) with pdu=0x2000190f7538 00:29:24.938 [2024-07-15 10:10:38.487111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:22800 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.938 [2024-07-15 10:10:38.487145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:29:24.938 [2024-07-15 10:10:38.495169] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01880) with pdu=0x2000190f6890 00:29:24.938 [2024-07-15 10:10:38.495949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:5246 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.938 [2024-07-15 10:10:38.495981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:29:24.938 [2024-07-15 10:10:38.505129] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01880) with pdu=0x2000190fa3a0 00:29:24.938 [2024-07-15 10:10:38.505654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:5015 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.938 [2024-07-15 10:10:38.505691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:29:24.938 [2024-07-15 10:10:38.517453] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01880) with pdu=0x2000190ef270 00:29:24.938 [2024-07-15 10:10:38.519229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:7007 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.938 [2024-07-15 10:10:38.519265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:29:25.197 [2024-07-15 10:10:38.525029] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01880) with pdu=0x2000190fef90 00:29:25.197 [2024-07-15 10:10:38.525908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:1804 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.197 [2024-07-15 10:10:38.525942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:29:25.197 [2024-07-15 10:10:38.535143] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01880) with pdu=0x2000190f3a28 00:29:25.197 [2024-07-15 10:10:38.535924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:16770 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.197 [2024-07-15 10:10:38.535955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:29:25.197 [2024-07-15 10:10:38.544766] tcp.c:2067:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0x1b01880) with pdu=0x2000190e01f8 00:29:25.197 [2024-07-15 10:10:38.545328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:10531 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.197 [2024-07-15 10:10:38.545357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:29:25.197 [2024-07-15 10:10:38.555707] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01880) with pdu=0x2000190e0a68 00:29:25.197 [2024-07-15 10:10:38.557194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:25404 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.197 [2024-07-15 10:10:38.557228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:29:25.197 [2024-07-15 10:10:38.562414] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01880) with pdu=0x2000190f8a50 00:29:25.197 [2024-07-15 10:10:38.563038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:18315 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.197 [2024-07-15 10:10:38.563068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:29:25.197 [2024-07-15 10:10:38.573519] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01880) with pdu=0x2000190e7818 00:29:25.197 [2024-07-15 10:10:38.574643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:12075 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.197 [2024-07-15 10:10:38.574678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:29:25.197 [2024-07-15 10:10:38.583149] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01880) with pdu=0x2000190e4140 00:29:25.197 [2024-07-15 10:10:38.584509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:6479 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.197 [2024-07-15 10:10:38.584539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:29:25.197 [2024-07-15 10:10:38.592534] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01880) with pdu=0x2000190f0350 00:29:25.197 [2024-07-15 10:10:38.593948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:23335 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.197 [2024-07-15 10:10:38.593979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:29:25.197 [2024-07-15 10:10:38.600281] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01880) with pdu=0x2000190f0350 00:29:25.197 [2024-07-15 10:10:38.601229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:25425 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.197 [2024-07-15 10:10:38.601258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:29:25.197 [2024-07-15 10:10:38.611603] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01880) with pdu=0x2000190f7da8 00:29:25.197 [2024-07-15 10:10:38.613151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:10294 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.197 [2024-07-15 10:10:38.613183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:29:25.197 [2024-07-15 10:10:38.621240] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01880) with pdu=0x2000190de038 00:29:25.197 [2024-07-15 10:10:38.622744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:1974 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.197 [2024-07-15 10:10:38.622773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:25.197 [2024-07-15 10:10:38.630560] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01880) with pdu=0x2000190f8618 00:29:25.198 [2024-07-15 10:10:38.632108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:6519 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.198 [2024-07-15 10:10:38.632138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:29:25.198 [2024-07-15 10:10:38.640561] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01880) with pdu=0x2000190f20d8 00:29:25.198 [2024-07-15 10:10:38.642238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:1106 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.198 [2024-07-15 10:10:38.642265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:29:25.198 [2024-07-15 10:10:38.647356] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01880) with pdu=0x2000190ddc00 00:29:25.198 [2024-07-15 10:10:38.648026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:13845 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.198 [2024-07-15 10:10:38.648053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:29:25.198 [2024-07-15 10:10:38.657974] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01880) with pdu=0x2000190fcdd0 00:29:25.198 [2024-07-15 10:10:38.659142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:10274 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.198 [2024-07-15 10:10:38.659172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:29:25.198 [2024-07-15 10:10:38.667164] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01880) with pdu=0x2000190feb58 00:29:25.198 [2024-07-15 10:10:38.667951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:10805 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.198 [2024-07-15 10:10:38.667979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:29:25.198 [2024-07-15 
10:10:38.676159] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01880) with pdu=0x2000190de038 00:29:25.198 [2024-07-15 10:10:38.677239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:7090 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.198 [2024-07-15 10:10:38.677268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:29:25.198 [2024-07-15 10:10:38.685010] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01880) with pdu=0x2000190e6738 00:29:25.198 [2024-07-15 10:10:38.686059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:21223 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.198 [2024-07-15 10:10:38.686083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:29:25.198 [2024-07-15 10:10:38.694636] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01880) with pdu=0x2000190f5be8 00:29:25.198 [2024-07-15 10:10:38.695868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:16529 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.198 [2024-07-15 10:10:38.695895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:29:25.198 [2024-07-15 10:10:38.703301] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01880) with pdu=0x2000190e3060 00:29:25.198 [2024-07-15 10:10:38.704309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:278 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.198 [2024-07-15 10:10:38.704342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:29:25.198 [2024-07-15 10:10:38.712686] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01880) with pdu=0x2000190e0ea0 00:29:25.198 [2024-07-15 10:10:38.713515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:7107 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.198 [2024-07-15 10:10:38.713547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:29:25.198 [2024-07-15 10:10:38.722727] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01880) with pdu=0x2000190e95a0 00:29:25.198 [2024-07-15 10:10:38.723966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:21370 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.198 [2024-07-15 10:10:38.723996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:29:25.198 [2024-07-15 10:10:38.731678] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01880) with pdu=0x2000190ef270 00:29:25.198 [2024-07-15 10:10:38.732879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:3885 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.198 [2024-07-15 10:10:38.732910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:004d p:0 m:0 dnr:0 
00:29:25.198 [2024-07-15 10:10:38.740702] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01880) with pdu=0x2000190fef90 00:29:25.198 [2024-07-15 10:10:38.741670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:18307 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.198 [2024-07-15 10:10:38.741705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:29:25.198 [2024-07-15 10:10:38.750619] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01880) with pdu=0x2000190eee38 00:29:25.198 [2024-07-15 10:10:38.751335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:18953 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.198 [2024-07-15 10:10:38.751364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:25.198 [2024-07-15 10:10:38.760454] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01880) with pdu=0x2000190e27f0 00:29:25.198 [2024-07-15 10:10:38.761291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:3407 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.198 [2024-07-15 10:10:38.761319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:25.198 [2024-07-15 10:10:38.769241] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01880) with pdu=0x2000190f8e88 00:29:25.198 [2024-07-15 10:10:38.769965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:11346 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.198 [2024-07-15 10:10:38.769993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:29:25.198 [2024-07-15 10:10:38.778016] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01880) with pdu=0x2000190ff3c8 00:29:25.198 [2024-07-15 10:10:38.778555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:19092 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.198 [2024-07-15 10:10:38.778578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:29:25.457 [2024-07-15 10:10:38.788660] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01880) with pdu=0x2000190f7538 00:29:25.457 [2024-07-15 10:10:38.789918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:4408 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.457 [2024-07-15 10:10:38.789949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.457 [2024-07-15 10:10:38.797707] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01880) with pdu=0x2000190f5378 00:29:25.457 [2024-07-15 10:10:38.798829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:11413 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.457 [2024-07-15 10:10:38.798859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 
sqhd:006f p:0 m:0 dnr:0 00:29:25.457 [2024-07-15 10:10:38.806754] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01880) with pdu=0x2000190fc560 00:29:25.457 [2024-07-15 10:10:38.807573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:10380 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.457 [2024-07-15 10:10:38.807603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:25.457 [2024-07-15 10:10:38.815581] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01880) with pdu=0x2000190f20d8 00:29:25.457 [2024-07-15 10:10:38.816637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:14607 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.457 [2024-07-15 10:10:38.816675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:25.457 [2024-07-15 10:10:38.824575] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01880) with pdu=0x2000190eaab8 00:29:25.457 [2024-07-15 10:10:38.825575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:819 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.457 [2024-07-15 10:10:38.825604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:29:25.457 [2024-07-15 10:10:38.833403] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01880) with pdu=0x2000190f6458 00:29:25.457 [2024-07-15 10:10:38.834237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:22059 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.457 [2024-07-15 10:10:38.834266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:29:25.457 [2024-07-15 10:10:38.842495] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01880) with pdu=0x2000190f46d0 00:29:25.457 [2024-07-15 10:10:38.843399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:25237 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.457 [2024-07-15 10:10:38.843424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:29:25.457 [2024-07-15 10:10:38.851665] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01880) with pdu=0x2000190f8a50 00:29:25.457 [2024-07-15 10:10:38.852235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:14700 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.457 [2024-07-15 10:10:38.852263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:29:25.457 [2024-07-15 10:10:38.862982] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01880) with pdu=0x2000190e5a90 00:29:25.458 [2024-07-15 10:10:38.864676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:25357 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.458 [2024-07-15 10:10:38.864708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:25.458 [2024-07-15 10:10:38.869706] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01880) with pdu=0x2000190fa7d8 00:29:25.458 [2024-07-15 10:10:38.870407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:19874 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.458 [2024-07-15 10:10:38.870434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:29:25.458 [2024-07-15 10:10:38.880303] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01880) with pdu=0x2000190f0bc0 00:29:25.458 [2024-07-15 10:10:38.881595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:20825 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.458 [2024-07-15 10:10:38.881630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:29:25.458 [2024-07-15 10:10:38.889279] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01880) with pdu=0x2000190e01f8 00:29:25.458 [2024-07-15 10:10:38.890321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:2004 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.458 [2024-07-15 10:10:38.890354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:29:25.458 [2024-07-15 10:10:38.898573] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01880) with pdu=0x2000190f0bc0 00:29:25.458 [2024-07-15 10:10:38.899608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:21259 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.458 [2024-07-15 10:10:38.899638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:29:25.458 [2024-07-15 10:10:38.908556] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01880) with pdu=0x2000190ea680 00:29:25.458 [2024-07-15 10:10:38.909823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:5944 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.458 [2024-07-15 10:10:38.909853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:29:25.458 [2024-07-15 10:10:38.918354] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01880) with pdu=0x2000190f2948 00:29:25.458 [2024-07-15 10:10:38.919506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:15861 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.458 [2024-07-15 10:10:38.919535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:29:25.458 [2024-07-15 10:10:38.927294] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01880) with pdu=0x2000190eaef0 00:29:25.458 [2024-07-15 10:10:38.928338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:22556 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.458 [2024-07-15 10:10:38.928366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:25.458 [2024-07-15 10:10:38.938366] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01880) with pdu=0x2000190ed4e8 00:29:25.458 [2024-07-15 10:10:38.939837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:22175 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.458 [2024-07-15 10:10:38.939865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:29:25.458 [2024-07-15 10:10:38.944944] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01880) with pdu=0x2000190e1b48 00:29:25.458 [2024-07-15 10:10:38.945547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:25052 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.458 [2024-07-15 10:10:38.945568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:29:25.458 [2024-07-15 10:10:38.953680] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01880) with pdu=0x2000190f1430 00:29:25.458 [2024-07-15 10:10:38.954254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:4531 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.458 [2024-07-15 10:10:38.954279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:29:25.458 [2024-07-15 10:10:38.963397] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01880) with pdu=0x2000190f6020 00:29:25.458 [2024-07-15 10:10:38.964006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:25550 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.458 [2024-07-15 10:10:38.964050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:29:25.458 [2024-07-15 10:10:38.974228] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01880) with pdu=0x2000190f0350 00:29:25.458 [2024-07-15 10:10:38.975245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:17518 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.458 [2024-07-15 10:10:38.975276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:25.458 [2024-07-15 10:10:38.983160] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01880) with pdu=0x2000190e8d30 00:29:25.458 [2024-07-15 10:10:38.984031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:23856 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.458 [2024-07-15 10:10:38.984060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:29:25.458 [2024-07-15 10:10:38.993377] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01880) with pdu=0x2000190ff3c8 00:29:25.458 [2024-07-15 10:10:38.994535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:23567 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.458 [2024-07-15 10:10:38.994569] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:29:25.458 [2024-07-15 10:10:39.002588] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01880) with pdu=0x2000190e0a68 00:29:25.458 [2024-07-15 10:10:39.003472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:18463 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.458 [2024-07-15 10:10:39.003505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:29:25.458 [2024-07-15 10:10:39.011704] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01880) with pdu=0x2000190e0630 00:29:25.458 [2024-07-15 10:10:39.012593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:23626 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.458 [2024-07-15 10:10:39.012621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:29:25.458 [2024-07-15 10:10:39.023121] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01880) with pdu=0x2000190f96f8 00:29:25.458 [2024-07-15 10:10:39.024677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23426 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.458 [2024-07-15 10:10:39.024708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:29:25.458 [2024-07-15 10:10:39.032963] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01880) with pdu=0x2000190f57b0 00:29:25.458 [2024-07-15 10:10:39.034511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:12469 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.458 [2024-07-15 10:10:39.034542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:29:25.718 [2024-07-15 10:10:39.040867] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01880) with pdu=0x2000190f7538 00:29:25.718 [2024-07-15 10:10:39.041871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:16377 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.718 [2024-07-15 10:10:39.041903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:29:25.718 [2024-07-15 10:10:39.050079] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01880) with pdu=0x2000190e4de8 00:29:25.718 [2024-07-15 10:10:39.050815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:10552 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.718 [2024-07-15 10:10:39.050844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:29:25.718 [2024-07-15 10:10:39.058973] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01880) with pdu=0x2000190edd58 00:29:25.718 [2024-07-15 10:10:39.059578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:19409 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.718 [2024-07-15 
10:10:39.059605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:29:25.718 [2024-07-15 10:10:39.070190] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01880) with pdu=0x2000190f57b0 00:29:25.718 [2024-07-15 10:10:39.070958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:22911 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.718 [2024-07-15 10:10:39.070992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:29:25.718 [2024-07-15 10:10:39.079808] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01880) with pdu=0x2000190f2948 00:29:25.718 [2024-07-15 10:10:39.080534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13810 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.718 [2024-07-15 10:10:39.080570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:29:25.718 [2024-07-15 10:10:39.089607] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01880) with pdu=0x2000190f1430 00:29:25.718 [2024-07-15 10:10:39.090167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:8042 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.718 [2024-07-15 10:10:39.090199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:29:25.718 [2024-07-15 10:10:39.101138] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01880) with pdu=0x2000190e5a90 00:29:25.718 [2024-07-15 10:10:39.102391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:11213 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.718 [2024-07-15 10:10:39.102420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:25.718 [2024-07-15 10:10:39.110766] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01880) with pdu=0x2000190eea00 00:29:25.718 [2024-07-15 10:10:39.112095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:3195 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.718 [2024-07-15 10:10:39.112124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:29:25.718 [2024-07-15 10:10:39.119478] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01880) with pdu=0x2000190f0ff8 00:29:25.718 [2024-07-15 10:10:39.120592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:23121 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.718 [2024-07-15 10:10:39.120622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:29:25.718 [2024-07-15 10:10:39.129084] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01880) with pdu=0x2000190e84c0 00:29:25.718 [2024-07-15 10:10:39.130275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:580 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:29:25.718 [2024-07-15 10:10:39.130304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:29:25.718 [2024-07-15 10:10:39.139246] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01880) with pdu=0x2000190f5be8 00:29:25.718 [2024-07-15 10:10:39.139956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:24022 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.718 [2024-07-15 10:10:39.139978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:25.718 [2024-07-15 10:10:39.148975] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01880) with pdu=0x2000190e8d30 00:29:25.718 [2024-07-15 10:10:39.150046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:21985 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.718 [2024-07-15 10:10:39.150076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:29:25.718 [2024-07-15 10:10:39.158512] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01880) with pdu=0x2000190ef270 00:29:25.718 [2024-07-15 10:10:39.159381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3343 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.718 [2024-07-15 10:10:39.159411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:29:25.718 [2024-07-15 10:10:39.167781] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01880) with pdu=0x2000190f9b30 00:29:25.718 [2024-07-15 10:10:39.168483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:23131 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.718 [2024-07-15 10:10:39.168514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:29:25.718 [2024-07-15 10:10:39.179722] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01880) with pdu=0x2000190f8e88 00:29:25.718 [2024-07-15 10:10:39.181383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:6907 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.718 [2024-07-15 10:10:39.181421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:25.718 [2024-07-15 10:10:39.187009] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01880) with pdu=0x2000190fbcf0 00:29:25.718 [2024-07-15 10:10:39.187861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:17951 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.718 [2024-07-15 10:10:39.187891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:29:25.718 [2024-07-15 10:10:39.196737] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01880) with pdu=0x2000190f6458 00:29:25.718 [2024-07-15 10:10:39.197572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:9171 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:29:25.718 [2024-07-15 10:10:39.197601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:29:25.718 [2024-07-15 10:10:39.206778] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01880) with pdu=0x2000190fac10 00:29:25.718 [2024-07-15 10:10:39.207348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:7023 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.718 [2024-07-15 10:10:39.207389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:29:25.718 [2024-07-15 10:10:39.218698] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01880) with pdu=0x2000190e88f8 00:29:25.718 [2024-07-15 10:10:39.220437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:22779 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.718 [2024-07-15 10:10:39.220468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:25.718 [2024-07-15 10:10:39.227429] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01880) with pdu=0x2000190df550 00:29:25.718 [2024-07-15 10:10:39.228579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:14288 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.718 [2024-07-15 10:10:39.228609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:25.718 [2024-07-15 10:10:39.236782] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01880) with pdu=0x2000190fdeb0 00:29:25.718 [2024-07-15 10:10:39.237888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:9204 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.718 [2024-07-15 10:10:39.237919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:29:25.718 [2024-07-15 10:10:39.246688] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01880) with pdu=0x2000190fac10 00:29:25.718 [2024-07-15 10:10:39.247861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:14967 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.718 [2024-07-15 10:10:39.247895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:29:25.718 [2024-07-15 10:10:39.256431] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01880) with pdu=0x2000190e1710 00:29:25.718 [2024-07-15 10:10:39.257186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:20550 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.718 [2024-07-15 10:10:39.257220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:29:25.718 [2024-07-15 10:10:39.266034] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01880) with pdu=0x2000190eb328 00:29:25.718 [2024-07-15 10:10:39.267071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 
lba:2496 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.718 [2024-07-15 10:10:39.267101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:29:25.718 [2024-07-15 10:10:39.275452] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01880) with pdu=0x2000190ea248 00:29:25.718 [2024-07-15 10:10:39.276321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:7289 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.718 [2024-07-15 10:10:39.276351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:29:25.718 [2024-07-15 10:10:39.286491] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01880) with pdu=0x2000190fa3a0 00:29:25.718 [2024-07-15 10:10:39.288082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:17301 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.718 [2024-07-15 10:10:39.288113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:29:25.718 [2024-07-15 10:10:39.293316] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01880) with pdu=0x2000190f9b30 00:29:25.718 [2024-07-15 10:10:39.293864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:23037 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.719 [2024-07-15 10:10:39.293885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:29:25.977 [2024-07-15 10:10:39.304924] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01880) with pdu=0x2000190efae0 00:29:25.977 [2024-07-15 10:10:39.306340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:6378 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.977 [2024-07-15 10:10:39.306373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:29:25.977 [2024-07-15 10:10:39.314140] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01880) with pdu=0x2000190e0a68 00:29:25.977 [2024-07-15 10:10:39.315523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:1051 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.977 [2024-07-15 10:10:39.315555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:29:25.977 [2024-07-15 10:10:39.323677] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01880) with pdu=0x2000190e49b0 00:29:25.977 [2024-07-15 10:10:39.325199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:9245 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.978 [2024-07-15 10:10:39.325228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:29:25.978 [2024-07-15 10:10:39.330225] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01880) with pdu=0x2000190e1f80 00:29:25.978 [2024-07-15 10:10:39.330779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:57 nsid:1 lba:19949 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.978 [2024-07-15 10:10:39.330815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:29:25.978 [2024-07-15 10:10:39.341702] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01880) with pdu=0x2000190e73e0 00:29:25.978 [2024-07-15 10:10:39.343027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:1656 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.978 [2024-07-15 10:10:39.343057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:29:25.978 [2024-07-15 10:10:39.348220] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01880) with pdu=0x2000190ee5c8 00:29:25.978 [2024-07-15 10:10:39.348964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:25409 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.978 [2024-07-15 10:10:39.348992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:29:25.978 [2024-07-15 10:10:39.359073] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01880) with pdu=0x2000190e7c50 00:29:25.978 [2024-07-15 10:10:39.360322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:20626 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.978 [2024-07-15 10:10:39.360353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:29:25.978 [2024-07-15 10:10:39.368337] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01880) with pdu=0x2000190df550 00:29:25.978 [2024-07-15 10:10:39.369631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:16534 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.978 [2024-07-15 10:10:39.369671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:29:25.978 [2024-07-15 10:10:39.376955] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01880) with pdu=0x2000190df988 00:29:25.978 [2024-07-15 10:10:39.378110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:4552 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.978 [2024-07-15 10:10:39.378137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:29:25.978 [2024-07-15 10:10:39.385810] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01880) with pdu=0x2000190ea248 00:29:25.978 [2024-07-15 10:10:39.386899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:10755 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.978 [2024-07-15 10:10:39.386928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:29:25.978 [2024-07-15 10:10:39.394745] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01880) with pdu=0x2000190e01f8 00:29:25.978 [2024-07-15 10:10:39.395431] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:18088 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.978 [2024-07-15 10:10:39.395459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:29:25.978 [2024-07-15 10:10:39.403518] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01880) with pdu=0x2000190ed4e8 00:29:25.978 [2024-07-15 10:10:39.404527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:24873 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.978 [2024-07-15 10:10:39.404556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:29:25.978 [2024-07-15 10:10:39.412507] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01880) with pdu=0x2000190f31b8 00:29:25.978 [2024-07-15 10:10:39.413554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:18019 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.978 [2024-07-15 10:10:39.413582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:29:25.978 [2024-07-15 10:10:39.421824] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01880) with pdu=0x2000190e3498 00:29:25.978 [2024-07-15 10:10:39.422855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:22054 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.978 [2024-07-15 10:10:39.422887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:29:25.978 [2024-07-15 10:10:39.430732] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01880) with pdu=0x2000190e0630 00:29:25.978 [2024-07-15 10:10:39.431640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:2882 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.978 [2024-07-15 10:10:39.431682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:29:25.978 00:29:25.978 Latency(us) 00:29:25.978 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:25.978 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:25.978 nvme0n1 : 2.00 26348.60 102.92 0.00 0.00 4852.99 1860.19 14366.41 00:29:25.978 =================================================================================================================== 00:29:25.978 Total : 26348.60 102.92 0.00 0.00 4852.99 1860.19 14366.41 00:29:25.978 0 00:29:25.978 10:10:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:29:25.978 10:10:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:29:25.978 10:10:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:29:25.978 10:10:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:29:25.978 | .driver_specific 00:29:25.978 | .nvme_error 00:29:25.978 | .status_code 00:29:25.978 | .command_transient_transport_error' 00:29:26.237 10:10:39 
nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 206 > 0 )) 00:29:26.237 10:10:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 93640 00:29:26.237 10:10:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 93640 ']' 00:29:26.237 10:10:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 93640 00:29:26.237 10:10:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:29:26.237 10:10:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:26.237 10:10:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 93640 00:29:26.237 10:10:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:29:26.237 killing process with pid 93640 00:29:26.237 Received shutdown signal, test time was about 2.000000 seconds 00:29:26.237 00:29:26.237 Latency(us) 00:29:26.237 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:26.237 =================================================================================================================== 00:29:26.237 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:26.237 10:10:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:29:26.237 10:10:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 93640' 00:29:26.237 10:10:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 93640 00:29:26.237 10:10:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 93640 00:29:26.497 10:10:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:29:26.497 10:10:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:29:26.497 10:10:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:29:26.497 10:10:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:29:26.497 10:10:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:29:26.497 10:10:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=93725 00:29:26.497 10:10:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 93725 /var/tmp/bperf.sock 00:29:26.497 10:10:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:29:26.497 10:10:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 93725 ']' 00:29:26.497 10:10:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:26.497 10:10:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:26.497 10:10:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:26.497 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
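The trace above condenses the hand-off between two digest-error passes: host/digest.sh reads the transient-transport-error counter for nvme0n1 over the bperf RPC socket, asserts it is non-zero (206 here), kills the previous bdevperf instance (pid 93640), and relaunches bdevperf for the randwrite / 128 KiB / queue-depth-16 pass (pid 93725). A minimal sketch of that hand-off, rebuilt only from the commands visible in the trace; the paths, socket, bdev name, and jq filter are copied from the log, while $old_bperf_pid / $new_bperf_pid are placeholder variable names and error handling is omitted:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/bperf.sock

  # Count digest failures that surfaced as COMMAND TRANSIENT TRANSPORT ERROR completions.
  errs=$("$rpc" -s "$sock" bdev_get_iostat -b nvme0n1 \
          | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
  (( errs > 0 ))                                   # the pass only counts as a success if errors were recorded (206 above)

  kill "$old_bperf_pid" && wait "$old_bperf_pid"   # tear down the previous bdevperf (pid 93640 in the trace)

  # Relaunch bdevperf for the next pass: randwrite, 128 KiB I/O, queue depth 16, 2 seconds.
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      -m 2 -r "$sock" -w randwrite -o 131072 -t 2 -q 16 -z &
  new_bperf_pid=$!                                 # 93725 in the trace; the suite's waitforlisten then blocks on $sock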
00:29:26.497 10:10:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:26.497 10:10:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:26.497 [2024-07-15 10:10:39.971325] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:29:26.497 [2024-07-15 10:10:39.971485] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6I/O size of 131072 is greater than zero copy threshold (65536). 00:29:26.497 Zero copy mechanism will not be used. 00:29:26.497 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93725 ] 00:29:26.767 [2024-07-15 10:10:40.110168] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:26.767 [2024-07-15 10:10:40.218532] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:27.351 10:10:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:27.351 10:10:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:29:27.351 10:10:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:29:27.351 10:10:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:29:27.610 10:10:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:29:27.610 10:10:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:27.610 10:10:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:27.610 10:10:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:27.610 10:10:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:27.610 10:10:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:27.871 nvme0n1 00:29:27.871 10:10:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:29:27.871 10:10:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:27.871 10:10:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:27.871 10:10:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:27.871 10:10:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:29:27.871 10:10:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:27.871 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:27.871 Zero copy mechanism will not be used. 00:29:27.871 Running I/O for 2 seconds... 
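The setup logged just above, before "Running I/O for 2 seconds...", amounts to a short RPC sequence against the new bdevperf instance: enable per-status-code NVMe error counters with unlimited bdev retries, attach the TCP controller with data digest enabled (--ddgst), switch the accel crc32c error injection from disable to corrupt with -i 32, and start the workload through bdevperf.py. A hedged sketch of that sequence, using only the commands shown in the trace; rpc_cmd is the suite's own RPC helper, and which socket it targets is an assumption not spelled out in the log:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/bperf.sock

  # Keep per-status-code NVMe error counters and retry failed I/O indefinitely,
  # so digest failures accumulate as counters instead of failing the bdev.
  "$rpc" -s "$sock" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

  rpc_cmd accel_error_inject_error -o crc32c -t disable          # crc32c left intact while attaching
  "$rpc" -s "$sock" bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0             # data digest on; nvme0n1 appears
  rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32    # begin corrupting crc32c results (-i 32 as in the trace)

  # Drive the randwrite/131072/qd16 workload; each corrupted digest shows up below as a
  # "Data digest error" paired with a COMMAND TRANSIENT TRANSPORT ERROR completion.
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s "$sock" perform_tests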
00:29:27.871 [2024-07-15 10:10:41.406886] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:27.871 [2024-07-15 10:10:41.407403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.871 [2024-07-15 10:10:41.407429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:27.871 [2024-07-15 10:10:41.410797] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:27.871 [2024-07-15 10:10:41.411243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.871 [2024-07-15 10:10:41.411276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:27.871 [2024-07-15 10:10:41.414824] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:27.871 [2024-07-15 10:10:41.415264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.871 [2024-07-15 10:10:41.415297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:27.871 [2024-07-15 10:10:41.418709] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:27.871 [2024-07-15 10:10:41.419161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.871 [2024-07-15 10:10:41.419193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:27.871 [2024-07-15 10:10:41.422686] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:27.871 [2024-07-15 10:10:41.423114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.871 [2024-07-15 10:10:41.423136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:27.871 [2024-07-15 10:10:41.426560] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:27.871 [2024-07-15 10:10:41.426996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.871 [2024-07-15 10:10:41.427043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:27.871 [2024-07-15 10:10:41.430422] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:27.871 [2024-07-15 10:10:41.430830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.871 [2024-07-15 10:10:41.430848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:27.871 [2024-07-15 10:10:41.434285] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:27.871 [2024-07-15 10:10:41.434685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.871 [2024-07-15 10:10:41.434706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:27.871 [2024-07-15 10:10:41.438095] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:27.871 [2024-07-15 10:10:41.438525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.871 [2024-07-15 10:10:41.438545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:27.871 [2024-07-15 10:10:41.442102] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:27.871 [2024-07-15 10:10:41.442555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.871 [2024-07-15 10:10:41.442586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:27.871 [2024-07-15 10:10:41.446034] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:27.871 [2024-07-15 10:10:41.446466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.871 [2024-07-15 10:10:41.446496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:27.871 [2024-07-15 10:10:41.450253] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:27.871 [2024-07-15 10:10:41.450678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.871 [2024-07-15 10:10:41.450702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.132 [2024-07-15 10:10:41.454445] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:28.133 [2024-07-15 10:10:41.454926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.133 [2024-07-15 10:10:41.454955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:28.133 [2024-07-15 10:10:41.458643] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:28.133 [2024-07-15 10:10:41.459114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.133 [2024-07-15 10:10:41.459149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:28.133 [2024-07-15 10:10:41.462787] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:28.133 [2024-07-15 10:10:41.463240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.133 [2024-07-15 10:10:41.463274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:28.133 [2024-07-15 10:10:41.466908] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:28.133 [2024-07-15 10:10:41.467354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.133 [2024-07-15 10:10:41.467387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.133 [2024-07-15 10:10:41.470988] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:28.133 [2024-07-15 10:10:41.471432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.133 [2024-07-15 10:10:41.471464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:28.133 [2024-07-15 10:10:41.475082] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:28.133 [2024-07-15 10:10:41.475506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.133 [2024-07-15 10:10:41.475535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:28.133 [2024-07-15 10:10:41.479136] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:28.133 [2024-07-15 10:10:41.479572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.133 [2024-07-15 10:10:41.479602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:28.133 [2024-07-15 10:10:41.483170] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:28.133 [2024-07-15 10:10:41.483595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.133 [2024-07-15 10:10:41.483624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.133 [2024-07-15 10:10:41.487181] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:28.133 [2024-07-15 10:10:41.487614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.133 [2024-07-15 10:10:41.487645] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:28.133 [2024-07-15 10:10:41.491140] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:28.133 [2024-07-15 10:10:41.491569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.133 [2024-07-15 10:10:41.491600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:28.133 [2024-07-15 10:10:41.495160] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:28.133 [2024-07-15 10:10:41.495611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.133 [2024-07-15 10:10:41.495642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:28.133 [2024-07-15 10:10:41.499014] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:28.133 [2024-07-15 10:10:41.499465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.133 [2024-07-15 10:10:41.499498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.133 [2024-07-15 10:10:41.503065] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:28.133 [2024-07-15 10:10:41.503489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.133 [2024-07-15 10:10:41.503514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:28.133 [2024-07-15 10:10:41.507156] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:28.133 [2024-07-15 10:10:41.507624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.133 [2024-07-15 10:10:41.507671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:28.133 [2024-07-15 10:10:41.511337] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:28.133 [2024-07-15 10:10:41.511779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.133 [2024-07-15 10:10:41.511802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:28.133 [2024-07-15 10:10:41.515471] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:28.133 [2024-07-15 10:10:41.515939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.133 
[2024-07-15 10:10:41.515973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.133 [2024-07-15 10:10:41.519579] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:28.133 [2024-07-15 10:10:41.519991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.133 [2024-07-15 10:10:41.520038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:28.133 [2024-07-15 10:10:41.523638] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:28.133 [2024-07-15 10:10:41.524112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.133 [2024-07-15 10:10:41.524144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:28.133 [2024-07-15 10:10:41.527782] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:28.133 [2024-07-15 10:10:41.528202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.133 [2024-07-15 10:10:41.528225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:28.133 [2024-07-15 10:10:41.531807] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:28.133 [2024-07-15 10:10:41.532243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.133 [2024-07-15 10:10:41.532273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.133 [2024-07-15 10:10:41.535925] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:28.133 [2024-07-15 10:10:41.536363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.133 [2024-07-15 10:10:41.536404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:28.133 [2024-07-15 10:10:41.540159] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:28.133 [2024-07-15 10:10:41.540622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.133 [2024-07-15 10:10:41.540646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:28.133 [2024-07-15 10:10:41.544405] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:28.133 [2024-07-15 10:10:41.544882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21248 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.133 [2024-07-15 10:10:41.544914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:28.133 [2024-07-15 10:10:41.548683] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:28.133 [2024-07-15 10:10:41.549114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.133 [2024-07-15 10:10:41.549146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.133 [2024-07-15 10:10:41.552878] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:28.133 [2024-07-15 10:10:41.553290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.133 [2024-07-15 10:10:41.553322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:28.133 [2024-07-15 10:10:41.557039] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:28.133 [2024-07-15 10:10:41.557501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.133 [2024-07-15 10:10:41.557536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:28.133 [2024-07-15 10:10:41.561372] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:28.134 [2024-07-15 10:10:41.561856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.134 [2024-07-15 10:10:41.561889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:28.134 [2024-07-15 10:10:41.565483] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:28.134 [2024-07-15 10:10:41.565904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.134 [2024-07-15 10:10:41.565926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.134 [2024-07-15 10:10:41.569477] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:28.134 [2024-07-15 10:10:41.569924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.134 [2024-07-15 10:10:41.569950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:28.134 [2024-07-15 10:10:41.573492] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:28.134 [2024-07-15 10:10:41.573944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.134 [2024-07-15 10:10:41.573962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:28.134 [2024-07-15 10:10:41.577414] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:28.134 [2024-07-15 10:10:41.577882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.134 [2024-07-15 10:10:41.577907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:28.134 [2024-07-15 10:10:41.581446] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:28.134 [2024-07-15 10:10:41.581891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.134 [2024-07-15 10:10:41.581923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.134 [2024-07-15 10:10:41.585524] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:28.134 [2024-07-15 10:10:41.585965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.134 [2024-07-15 10:10:41.585992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:28.134 [2024-07-15 10:10:41.589519] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:28.134 [2024-07-15 10:10:41.589992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.134 [2024-07-15 10:10:41.590019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:28.134 [2024-07-15 10:10:41.593859] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:28.134 [2024-07-15 10:10:41.594343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.134 [2024-07-15 10:10:41.594379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:28.134 [2024-07-15 10:10:41.598182] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:28.134 [2024-07-15 10:10:41.598616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.134 [2024-07-15 10:10:41.598641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.134 [2024-07-15 10:10:41.602171] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:28.134 [2024-07-15 10:10:41.602601] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.134 [2024-07-15 10:10:41.602625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:28.134 [2024-07-15 10:10:41.606182] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:28.134 [2024-07-15 10:10:41.606636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.134 [2024-07-15 10:10:41.606683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:28.134 [2024-07-15 10:10:41.610135] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:28.134 [2024-07-15 10:10:41.610521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.134 [2024-07-15 10:10:41.610544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:28.134 [2024-07-15 10:10:41.613980] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:28.134 [2024-07-15 10:10:41.614429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.134 [2024-07-15 10:10:41.614460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.134 [2024-07-15 10:10:41.617927] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:28.134 [2024-07-15 10:10:41.618358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.134 [2024-07-15 10:10:41.618389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:28.134 [2024-07-15 10:10:41.621850] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:28.134 [2024-07-15 10:10:41.622300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.134 [2024-07-15 10:10:41.622322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:28.134 [2024-07-15 10:10:41.625729] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:28.134 [2024-07-15 10:10:41.626159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.134 [2024-07-15 10:10:41.626190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:28.134 [2024-07-15 10:10:41.629621] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:28.134 
[2024-07-15 10:10:41.630103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.134 [2024-07-15 10:10:41.630133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.134 [2024-07-15 10:10:41.633563] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:28.134 [2024-07-15 10:10:41.634052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.134 [2024-07-15 10:10:41.634081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:28.134 [2024-07-15 10:10:41.637488] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:28.134 [2024-07-15 10:10:41.637939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.134 [2024-07-15 10:10:41.637965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:28.134 [2024-07-15 10:10:41.641480] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:28.134 [2024-07-15 10:10:41.641938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.134 [2024-07-15 10:10:41.641969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:28.134 [2024-07-15 10:10:41.645462] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:28.134 [2024-07-15 10:10:41.645909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.134 [2024-07-15 10:10:41.645948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.134 [2024-07-15 10:10:41.649376] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:28.134 [2024-07-15 10:10:41.649792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.134 [2024-07-15 10:10:41.649815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:28.134 [2024-07-15 10:10:41.653269] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:28.134 [2024-07-15 10:10:41.653718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.134 [2024-07-15 10:10:41.653739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:28.134 [2024-07-15 10:10:41.657145] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:28.134 [2024-07-15 10:10:41.657573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.134 [2024-07-15 10:10:41.657597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:28.134 [2024-07-15 10:10:41.661045] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:28.134 [2024-07-15 10:10:41.661465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.134 [2024-07-15 10:10:41.661487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.134 [2024-07-15 10:10:41.664928] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:28.134 [2024-07-15 10:10:41.665349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.134 [2024-07-15 10:10:41.665379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:28.134 [2024-07-15 10:10:41.668874] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:28.134 [2024-07-15 10:10:41.669292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.134 [2024-07-15 10:10:41.669312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:28.134 [2024-07-15 10:10:41.672719] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:28.134 [2024-07-15 10:10:41.673143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.134 [2024-07-15 10:10:41.673169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:28.134 [2024-07-15 10:10:41.676568] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:28.135 [2024-07-15 10:10:41.676982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.135 [2024-07-15 10:10:41.677003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.135 [2024-07-15 10:10:41.680430] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:28.135 [2024-07-15 10:10:41.680845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.135 [2024-07-15 10:10:41.680865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:28.135 [2024-07-15 10:10:41.684210] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:28.135 [2024-07-15 10:10:41.684657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.135 [2024-07-15 10:10:41.684686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:28.135 [2024-07-15 10:10:41.687997] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:28.135 [2024-07-15 10:10:41.688450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.135 [2024-07-15 10:10:41.688471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:28.135 [2024-07-15 10:10:41.691850] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:28.135 [2024-07-15 10:10:41.692298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.135 [2024-07-15 10:10:41.692322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.135 [2024-07-15 10:10:41.695875] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:28.135 [2024-07-15 10:10:41.696312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.135 [2024-07-15 10:10:41.696339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:28.135 [2024-07-15 10:10:41.699717] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:28.135 [2024-07-15 10:10:41.700119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.135 [2024-07-15 10:10:41.700146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:28.135 [2024-07-15 10:10:41.703745] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:28.135 [2024-07-15 10:10:41.704152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.135 [2024-07-15 10:10:41.704171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:28.135 [2024-07-15 10:10:41.707730] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:28.135 [2024-07-15 10:10:41.708190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.135 [2024-07-15 10:10:41.708218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:29:28.135 [2024-07-15 10:10:41.711989] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:28.135 [2024-07-15 10:10:41.712492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.135 [2024-07-15 10:10:41.712523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:28.396 [2024-07-15 10:10:41.716406] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:28.396 [2024-07-15 10:10:41.716849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.397 [2024-07-15 10:10:41.716869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:28.397 [2024-07-15 10:10:41.720579] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:28.397 [2024-07-15 10:10:41.721077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.397 [2024-07-15 10:10:41.721116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:28.397 [2024-07-15 10:10:41.725145] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:28.397 [2024-07-15 10:10:41.725625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.397 [2024-07-15 10:10:41.725655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.397 [2024-07-15 10:10:41.729476] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:28.397 [2024-07-15 10:10:41.729957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.397 [2024-07-15 10:10:41.729981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:28.397 [2024-07-15 10:10:41.733925] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:28.397 [2024-07-15 10:10:41.734351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.397 [2024-07-15 10:10:41.734376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:28.397 [2024-07-15 10:10:41.738262] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:28.397 [2024-07-15 10:10:41.738721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.397 [2024-07-15 10:10:41.738745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:28.397 [2024-07-15 10:10:41.742638] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:28.397 [2024-07-15 10:10:41.743079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.397 [2024-07-15 10:10:41.743102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.397 [2024-07-15 10:10:41.746709] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:28.397 [2024-07-15 10:10:41.747104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.397 [2024-07-15 10:10:41.747126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:28.397 [2024-07-15 10:10:41.750731] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:28.397 [2024-07-15 10:10:41.751138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.397 [2024-07-15 10:10:41.751165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:28.397 [2024-07-15 10:10:41.754698] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:28.397 [2024-07-15 10:10:41.755134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.397 [2024-07-15 10:10:41.755155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:28.397 [2024-07-15 10:10:41.758696] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:28.397 [2024-07-15 10:10:41.759120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.397 [2024-07-15 10:10:41.759142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.397 [2024-07-15 10:10:41.762886] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:28.397 [2024-07-15 10:10:41.763319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.397 [2024-07-15 10:10:41.763349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:28.397 [2024-07-15 10:10:41.767159] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:28.397 [2024-07-15 10:10:41.767595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.397 [2024-07-15 10:10:41.767619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:28.397 [2024-07-15 10:10:41.771330] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:28.397 [2024-07-15 10:10:41.771766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.397 [2024-07-15 10:10:41.771790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:28.397 [2024-07-15 10:10:41.775464] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:28.397 [2024-07-15 10:10:41.775874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.397 [2024-07-15 10:10:41.775896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.397 [2024-07-15 10:10:41.779644] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:28.397 [2024-07-15 10:10:41.780077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.397 [2024-07-15 10:10:41.780100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:28.397 [2024-07-15 10:10:41.783686] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:28.397 [2024-07-15 10:10:41.784102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.397 [2024-07-15 10:10:41.784124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:28.397 [2024-07-15 10:10:41.787740] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:28.397 [2024-07-15 10:10:41.788160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.397 [2024-07-15 10:10:41.788184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:28.397 [2024-07-15 10:10:41.791734] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:28.397 [2024-07-15 10:10:41.792183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.397 [2024-07-15 10:10:41.792208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.397 [2024-07-15 10:10:41.795725] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:28.397 [2024-07-15 10:10:41.796182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.397 [2024-07-15 10:10:41.796221] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:28.397 [2024-07-15 10:10:41.799851] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:28.397 [2024-07-15 10:10:41.800277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.397 [2024-07-15 10:10:41.800299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:28.397 [2024-07-15 10:10:41.803859] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:28.397 [2024-07-15 10:10:41.804275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.397 [2024-07-15 10:10:41.804300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:28.397 [2024-07-15 10:10:41.807881] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:28.397 [2024-07-15 10:10:41.808336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.397 [2024-07-15 10:10:41.808364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.397 [2024-07-15 10:10:41.811851] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:28.397 [2024-07-15 10:10:41.812266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.397 [2024-07-15 10:10:41.812287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:28.397 [2024-07-15 10:10:41.815763] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:28.397 [2024-07-15 10:10:41.816177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.397 [2024-07-15 10:10:41.816199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:28.397 [2024-07-15 10:10:41.819695] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:28.397 [2024-07-15 10:10:41.820146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.397 [2024-07-15 10:10:41.820173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:28.397 [2024-07-15 10:10:41.823770] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:28.397 [2024-07-15 10:10:41.824186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.397 
[2024-07-15 10:10:41.824207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.397 [2024-07-15 10:10:41.827898] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:28.397 [2024-07-15 10:10:41.828328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.397 [2024-07-15 10:10:41.828355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:28.397 [2024-07-15 10:10:41.831790] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:28.397 [2024-07-15 10:10:41.832221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.397 [2024-07-15 10:10:41.832242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:28.397 [2024-07-15 10:10:41.835619] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:28.397 [2024-07-15 10:10:41.836063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.397 [2024-07-15 10:10:41.836085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:28.398 [2024-07-15 10:10:41.839588] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:28.398 [2024-07-15 10:10:41.840015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.398 [2024-07-15 10:10:41.840036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.398 [2024-07-15 10:10:41.843446] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:28.398 [2024-07-15 10:10:41.843864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.398 [2024-07-15 10:10:41.843884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:28.398 [2024-07-15 10:10:41.847277] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:28.398 [2024-07-15 10:10:41.847707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.398 [2024-07-15 10:10:41.847729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:28.398 [2024-07-15 10:10:41.851333] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:28.398 [2024-07-15 10:10:41.851780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16480 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.398 [2024-07-15 10:10:41.851804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:28.398 [2024-07-15 10:10:41.855430] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:28.398 [2024-07-15 10:10:41.855877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.398 [2024-07-15 10:10:41.855901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.398 [2024-07-15 10:10:41.859456] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:28.398 [2024-07-15 10:10:41.859903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.398 [2024-07-15 10:10:41.859926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:28.398 [2024-07-15 10:10:41.863398] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:28.398 [2024-07-15 10:10:41.863825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.398 [2024-07-15 10:10:41.863847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:28.398 [2024-07-15 10:10:41.867355] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:28.398 [2024-07-15 10:10:41.867778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.398 [2024-07-15 10:10:41.867799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:28.398 [2024-07-15 10:10:41.871317] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:28.398 [2024-07-15 10:10:41.871721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.398 [2024-07-15 10:10:41.871739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.398 [2024-07-15 10:10:41.875069] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:28.398 [2024-07-15 10:10:41.875492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.398 [2024-07-15 10:10:41.875516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:28.398 [2024-07-15 10:10:41.878851] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:28.398 [2024-07-15 10:10:41.879251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.398 [2024-07-15 10:10:41.879272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:28.398 [2024-07-15 10:10:41.882641] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:28.398 [2024-07-15 10:10:41.883068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.398 [2024-07-15 10:10:41.883090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:28.398 [2024-07-15 10:10:41.886496] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:28.398 [2024-07-15 10:10:41.886914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.398 [2024-07-15 10:10:41.886935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.398 [2024-07-15 10:10:41.890340] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:28.398 [2024-07-15 10:10:41.890768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.398 [2024-07-15 10:10:41.890790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:28.398 [2024-07-15 10:10:41.894197] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:28.398 [2024-07-15 10:10:41.894596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.398 [2024-07-15 10:10:41.894616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:28.398 [2024-07-15 10:10:41.898012] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:28.398 [2024-07-15 10:10:41.898427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.398 [2024-07-15 10:10:41.898447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:28.398 [2024-07-15 10:10:41.901919] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:28.398 [2024-07-15 10:10:41.902333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.398 [2024-07-15 10:10:41.902366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.398 [2024-07-15 10:10:41.905646] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:28.398 [2024-07-15 10:10:41.906068] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.398 [2024-07-15 10:10:41.906088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:28.398 [2024-07-15 10:10:41.909461] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:28.398 [2024-07-15 10:10:41.909938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.398 [2024-07-15 10:10:41.909959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:28.398 [2024-07-15 10:10:41.913456] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:28.398 [2024-07-15 10:10:41.913925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.398 [2024-07-15 10:10:41.913960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:28.398 [2024-07-15 10:10:41.917290] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:28.398 [2024-07-15 10:10:41.917724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.398 [2024-07-15 10:10:41.917744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.398 [2024-07-15 10:10:41.921115] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:28.398 [2024-07-15 10:10:41.921541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.398 [2024-07-15 10:10:41.921566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:28.398 [2024-07-15 10:10:41.924973] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:28.398 [2024-07-15 10:10:41.925405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.398 [2024-07-15 10:10:41.925430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:28.398 [2024-07-15 10:10:41.928862] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:28.398 [2024-07-15 10:10:41.929265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.398 [2024-07-15 10:10:41.929286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:28.398 [2024-07-15 10:10:41.932703] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:28.398 
[2024-07-15 10:10:41.933123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.398 [2024-07-15 10:10:41.933150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.398 [2024-07-15 10:10:41.936528] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:28.398 [2024-07-15 10:10:41.936963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.398 [2024-07-15 10:10:41.936983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:28.398 [2024-07-15 10:10:41.940336] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:28.398 [2024-07-15 10:10:41.940767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.398 [2024-07-15 10:10:41.940788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:28.398 [2024-07-15 10:10:41.944249] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:28.398 [2024-07-15 10:10:41.944709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.398 [2024-07-15 10:10:41.944730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:28.398 [2024-07-15 10:10:41.948186] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:28.398 [2024-07-15 10:10:41.948599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.398 [2024-07-15 10:10:41.948620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.399 [2024-07-15 10:10:41.952014] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:28.399 [2024-07-15 10:10:41.952477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.399 [2024-07-15 10:10:41.952500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:28.399 [2024-07-15 10:10:41.956080] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:28.399 [2024-07-15 10:10:41.956523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.399 [2024-07-15 10:10:41.956549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:28.399 [2024-07-15 10:10:41.959943] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:28.399 [2024-07-15 10:10:41.960393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.399 [2024-07-15 10:10:41.960414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:28.399 [2024-07-15 10:10:41.963818] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:28.399 [2024-07-15 10:10:41.964233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.399 [2024-07-15 10:10:41.964260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.399 [2024-07-15 10:10:41.967674] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:28.399 [2024-07-15 10:10:41.968125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.399 [2024-07-15 10:10:41.968146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:28.399 [2024-07-15 10:10:41.971452] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:28.399 [2024-07-15 10:10:41.971900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.399 [2024-07-15 10:10:41.971921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:28.399 [2024-07-15 10:10:41.975356] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:28.399 [2024-07-15 10:10:41.975777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.399 [2024-07-15 10:10:41.975799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:28.664 [2024-07-15 10:10:41.979320] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:28.664 [2024-07-15 10:10:41.979805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.664 [2024-07-15 10:10:41.979831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.664 [2024-07-15 10:10:41.983429] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:28.664 [2024-07-15 10:10:41.983869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.664 [2024-07-15 10:10:41.983892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:28.664 [2024-07-15 10:10:41.987435] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:28.664 [2024-07-15 10:10:41.987886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.664 [2024-07-15 10:10:41.987909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:28.664 [2024-07-15 10:10:41.991546] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:28.664 [2024-07-15 10:10:41.992002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.664 [2024-07-15 10:10:41.992024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:28.664 [2024-07-15 10:10:41.995579] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:28.665 [2024-07-15 10:10:41.996028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.665 [2024-07-15 10:10:41.996050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.665 [2024-07-15 10:10:41.999604] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:28.665 [2024-07-15 10:10:42.000060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.665 [2024-07-15 10:10:42.000082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:28.665 [2024-07-15 10:10:42.003484] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:28.665 [2024-07-15 10:10:42.003954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.665 [2024-07-15 10:10:42.003977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:28.665 [2024-07-15 10:10:42.007345] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:28.665 [2024-07-15 10:10:42.007799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.665 [2024-07-15 10:10:42.007820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:28.665 [2024-07-15 10:10:42.011120] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:28.665 [2024-07-15 10:10:42.011564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.665 [2024-07-15 10:10:42.011585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:29:28.665 [2024-07-15 10:10:42.015209] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:28.665 [2024-07-15 10:10:42.015669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.665 [2024-07-15 10:10:42.015703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:28.665 [2024-07-15 10:10:42.019206] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:28.665 [2024-07-15 10:10:42.019651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.665 [2024-07-15 10:10:42.019681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:28.665 [2024-07-15 10:10:42.023097] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:28.665 [2024-07-15 10:10:42.023505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.665 [2024-07-15 10:10:42.023526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:28.665 [2024-07-15 10:10:42.027158] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:28.665 [2024-07-15 10:10:42.027609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.665 [2024-07-15 10:10:42.027637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.665 [2024-07-15 10:10:42.030973] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:28.665 [2024-07-15 10:10:42.031397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.665 [2024-07-15 10:10:42.031418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:28.665 [2024-07-15 10:10:42.034777] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:28.665 [2024-07-15 10:10:42.035212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.665 [2024-07-15 10:10:42.035238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:28.665 [2024-07-15 10:10:42.038731] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:28.665 [2024-07-15 10:10:42.039151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.665 [2024-07-15 10:10:42.039172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:28.665 [2024-07-15 10:10:42.042586] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:28.665 [2024-07-15 10:10:42.043025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.665 [2024-07-15 10:10:42.043044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.665 [2024-07-15 10:10:42.046486] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:28.665 [2024-07-15 10:10:42.046937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.665 [2024-07-15 10:10:42.046958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:28.665 [2024-07-15 10:10:42.050478] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:28.665 [2024-07-15 10:10:42.050919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.665 [2024-07-15 10:10:42.050940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:28.665 [2024-07-15 10:10:42.054339] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:28.665 [2024-07-15 10:10:42.054757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.665 [2024-07-15 10:10:42.054776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:28.665 [2024-07-15 10:10:42.058179] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:28.665 [2024-07-15 10:10:42.058617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.665 [2024-07-15 10:10:42.058638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.665 [2024-07-15 10:10:42.062181] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:28.666 [2024-07-15 10:10:42.062606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.666 [2024-07-15 10:10:42.062627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:28.666 [2024-07-15 10:10:42.066120] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:28.666 [2024-07-15 10:10:42.066540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.666 [2024-07-15 10:10:42.066561] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:28.666 [2024-07-15 10:10:42.069969] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:28.666 [2024-07-15 10:10:42.070458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.666 [2024-07-15 10:10:42.070490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:28.666 [2024-07-15 10:10:42.074212] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:28.666 [2024-07-15 10:10:42.074642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.666 [2024-07-15 10:10:42.074675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.666 [2024-07-15 10:10:42.078210] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:28.666 [2024-07-15 10:10:42.078610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.666 [2024-07-15 10:10:42.078633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:28.666 [2024-07-15 10:10:42.082350] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:28.666 [2024-07-15 10:10:42.082834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.666 [2024-07-15 10:10:42.082863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:28.666 [2024-07-15 10:10:42.086298] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:28.666 [2024-07-15 10:10:42.086729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.666 [2024-07-15 10:10:42.086750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:28.666 [2024-07-15 10:10:42.090164] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:28.666 [2024-07-15 10:10:42.090580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.666 [2024-07-15 10:10:42.090601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.666 [2024-07-15 10:10:42.094223] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:28.666 [2024-07-15 10:10:42.094628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.666 [2024-07-15 10:10:42.094650] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:28.666 [2024-07-15 10:10:42.098097] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:28.666 [2024-07-15 10:10:42.098516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.666 [2024-07-15 10:10:42.098538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:28.666 [2024-07-15 10:10:42.102041] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:28.666 [2024-07-15 10:10:42.102453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.666 [2024-07-15 10:10:42.102475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:28.666 [2024-07-15 10:10:42.106092] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:28.666 [2024-07-15 10:10:42.106516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.666 [2024-07-15 10:10:42.106538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.666 [2024-07-15 10:10:42.109882] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:28.666 [2024-07-15 10:10:42.110312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.666 [2024-07-15 10:10:42.110345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:28.666 [2024-07-15 10:10:42.113997] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:28.666 [2024-07-15 10:10:42.114430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.666 [2024-07-15 10:10:42.114454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:28.666 [2024-07-15 10:10:42.117974] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:28.666 [2024-07-15 10:10:42.118407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.666 [2024-07-15 10:10:42.118435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:28.666 [2024-07-15 10:10:42.121889] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:28.666 [2024-07-15 10:10:42.122334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:28.666 [2024-07-15 10:10:42.122356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.666 [2024-07-15 10:10:42.126135] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:28.666 [2024-07-15 10:10:42.126567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.666 [2024-07-15 10:10:42.126595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:28.666 [2024-07-15 10:10:42.130138] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:28.666 [2024-07-15 10:10:42.130549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.666 [2024-07-15 10:10:42.130571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:28.666 [2024-07-15 10:10:42.134312] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:28.666 [2024-07-15 10:10:42.134741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.666 [2024-07-15 10:10:42.134761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:28.666 [2024-07-15 10:10:42.138319] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:28.666 [2024-07-15 10:10:42.138778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.666 [2024-07-15 10:10:42.138801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.666 [2024-07-15 10:10:42.142555] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:28.667 [2024-07-15 10:10:42.143009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.667 [2024-07-15 10:10:42.143031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:28.667 [2024-07-15 10:10:42.146766] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:28.667 [2024-07-15 10:10:42.147193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.667 [2024-07-15 10:10:42.147216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:28.667 [2024-07-15 10:10:42.150740] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:28.667 [2024-07-15 10:10:42.151192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19456 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.667 [2024-07-15 10:10:42.151228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:28.667 [2024-07-15 10:10:42.154825] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:28.667 [2024-07-15 10:10:42.155253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.667 [2024-07-15 10:10:42.155280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.667 [2024-07-15 10:10:42.158944] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:28.667 [2024-07-15 10:10:42.159371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.667 [2024-07-15 10:10:42.159398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:28.667 [2024-07-15 10:10:42.163133] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:28.667 [2024-07-15 10:10:42.163523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.667 [2024-07-15 10:10:42.163550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:28.667 [2024-07-15 10:10:42.167332] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:28.667 [2024-07-15 10:10:42.167785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.667 [2024-07-15 10:10:42.167813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:28.667 [2024-07-15 10:10:42.171626] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:28.667 [2024-07-15 10:10:42.172076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.667 [2024-07-15 10:10:42.172101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.667 [2024-07-15 10:10:42.175860] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:28.667 [2024-07-15 10:10:42.176284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.667 [2024-07-15 10:10:42.176318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:28.667 [2024-07-15 10:10:42.179967] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:28.667 [2024-07-15 10:10:42.180420] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.667 [2024-07-15 10:10:42.180444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:28.667 [2024-07-15 10:10:42.184097] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:28.667 [2024-07-15 10:10:42.184565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.667 [2024-07-15 10:10:42.184589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:28.667 [2024-07-15 10:10:42.188159] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:28.667 [2024-07-15 10:10:42.188593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.667 [2024-07-15 10:10:42.188616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.667 [2024-07-15 10:10:42.192117] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:28.667 [2024-07-15 10:10:42.192535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.667 [2024-07-15 10:10:42.192559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:28.667 [2024-07-15 10:10:42.195994] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:28.667 [2024-07-15 10:10:42.196404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.667 [2024-07-15 10:10:42.196439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:28.667 [2024-07-15 10:10:42.199782] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:28.667 [2024-07-15 10:10:42.200203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.667 [2024-07-15 10:10:42.200224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:28.667 [2024-07-15 10:10:42.203799] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:28.667 [2024-07-15 10:10:42.204235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.667 [2024-07-15 10:10:42.204265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.667 [2024-07-15 10:10:42.207866] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:28.667 [2024-07-15 10:10:42.208289] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.667 [2024-07-15 10:10:42.208310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:28.667 [2024-07-15 10:10:42.211838] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:28.667 [2024-07-15 10:10:42.212242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.667 [2024-07-15 10:10:42.212261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:28.667 [2024-07-15 10:10:42.215790] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:28.667 [2024-07-15 10:10:42.216220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.667 [2024-07-15 10:10:42.216241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:28.667 [2024-07-15 10:10:42.219593] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:28.667 [2024-07-15 10:10:42.220038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.667 [2024-07-15 10:10:42.220059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.667 [2024-07-15 10:10:42.223517] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:28.667 [2024-07-15 10:10:42.223940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.667 [2024-07-15 10:10:42.223961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:28.667 [2024-07-15 10:10:42.227325] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:28.667 [2024-07-15 10:10:42.227733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.667 [2024-07-15 10:10:42.227754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:28.667 [2024-07-15 10:10:42.231156] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:28.667 [2024-07-15 10:10:42.231581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.667 [2024-07-15 10:10:42.231602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:28.668 [2024-07-15 10:10:42.235025] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 
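Note: the "tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error" lines repeated above are logged when the NVMe/TCP host recomputes the data digest over a received PDU payload and it does not match the digest carried in the PDU (NVMe/TCP data digests are CRC32C). The C sketch below is illustrative only, not SPDK's tcp.c implementation; crc32c() and verify_data_digest() are hypothetical helpers that show the check in miniature, with the digest corrupted on purpose the way a digest error-injection run would.

/* Illustrative sketch of an NVMe/TCP data digest check (not SPDK code). */
#include <stdint.h>
#include <stddef.h>
#include <stdio.h>

/* Bitwise CRC32C (Castagnoli), reflected polynomial 0x82F63B78. */
static uint32_t crc32c(const uint8_t *buf, size_t len)
{
    uint32_t crc = 0xFFFFFFFFu;

    for (size_t i = 0; i < len; i++) {
        crc ^= buf[i];
        for (int b = 0; b < 8; b++) {
            crc = (crc & 1) ? (crc >> 1) ^ 0x82F63B78u : crc >> 1;
        }
    }
    return crc ^ 0xFFFFFFFFu;
}

/* Hypothetical helper: returns 0 when the received digest matches, -1 on mismatch. */
static int verify_data_digest(const uint8_t *payload, size_t len, uint32_t ddgst)
{
    uint32_t calc = crc32c(payload, len);

    if (calc != ddgst) {
        fprintf(stderr, "Data digest error: computed 0x%08x, received 0x%08x\n",
                calc, ddgst);
        return -1;
    }
    return 0;
}

int main(void)
{
    const uint8_t payload[] = { 'a', 'b', 'c' };
    uint32_t good = crc32c(payload, sizeof(payload));

    /* Flip a bit in the digest to force the mismatch path, mirroring what a
     * data-digest error-injection run provokes on every WRITE in this log. */
    return verify_data_digest(payload, sizeof(payload), good ^ 0x1u) ? 1 : 0;
}

In the log, each detected mismatch is followed by the affected WRITE being printed and completed with a transport-level error rather than being treated as a successful transfer.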
00:29:28.668 [2024-07-15 10:10:42.235443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.668 [2024-07-15 10:10:42.235464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.668 [2024-07-15 10:10:42.238847] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:28.668 [2024-07-15 10:10:42.239271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.668 [2024-07-15 10:10:42.239299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:28.668 [2024-07-15 10:10:42.242928] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:28.668 [2024-07-15 10:10:42.243371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.668 [2024-07-15 10:10:42.243394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:28.930 [2024-07-15 10:10:42.247005] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:28.930 [2024-07-15 10:10:42.247443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.930 [2024-07-15 10:10:42.247472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:28.930 [2024-07-15 10:10:42.250925] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:28.930 [2024-07-15 10:10:42.251322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.930 [2024-07-15 10:10:42.251343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.930 [2024-07-15 10:10:42.254976] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:28.930 [2024-07-15 10:10:42.255397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.930 [2024-07-15 10:10:42.255426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:28.930 [2024-07-15 10:10:42.258949] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:28.930 [2024-07-15 10:10:42.259368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.930 [2024-07-15 10:10:42.259389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:28.930 [2024-07-15 10:10:42.262784] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:28.930 [2024-07-15 10:10:42.263205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.930 [2024-07-15 10:10:42.263228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:28.930 [2024-07-15 10:10:42.266636] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:28.930 [2024-07-15 10:10:42.267061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.930 [2024-07-15 10:10:42.267081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.930 [2024-07-15 10:10:42.270467] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:28.930 [2024-07-15 10:10:42.270944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.930 [2024-07-15 10:10:42.270988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:28.930 [2024-07-15 10:10:42.274322] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:28.930 [2024-07-15 10:10:42.274756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.930 [2024-07-15 10:10:42.274778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:28.930 [2024-07-15 10:10:42.278171] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:28.930 [2024-07-15 10:10:42.278569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.930 [2024-07-15 10:10:42.278590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:28.930 [2024-07-15 10:10:42.282119] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:28.930 [2024-07-15 10:10:42.282548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.930 [2024-07-15 10:10:42.282569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.930 [2024-07-15 10:10:42.285966] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:28.930 [2024-07-15 10:10:42.286401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.930 [2024-07-15 10:10:42.286428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:28.930 [2024-07-15 10:10:42.289851] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:28.930 [2024-07-15 10:10:42.290300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.930 [2024-07-15 10:10:42.290326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:28.930 [2024-07-15 10:10:42.293790] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:28.930 [2024-07-15 10:10:42.294194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.930 [2024-07-15 10:10:42.294213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:28.930 [2024-07-15 10:10:42.297682] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:28.930 [2024-07-15 10:10:42.298103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.930 [2024-07-15 10:10:42.298122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.930 [2024-07-15 10:10:42.301582] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:28.930 [2024-07-15 10:10:42.302017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.930 [2024-07-15 10:10:42.302062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:28.930 [2024-07-15 10:10:42.305451] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:28.930 [2024-07-15 10:10:42.305910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.930 [2024-07-15 10:10:42.305946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:28.930 [2024-07-15 10:10:42.309315] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:28.930 [2024-07-15 10:10:42.309810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.930 [2024-07-15 10:10:42.309830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:28.930 [2024-07-15 10:10:42.313434] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:28.930 [2024-07-15 10:10:42.313881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.930 [2024-07-15 10:10:42.313901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:29:28.930 [2024-07-15 10:10:42.317333] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:28.930 [2024-07-15 10:10:42.317771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.930 [2024-07-15 10:10:42.317791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:28.930 [2024-07-15 10:10:42.321236] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:28.930 [2024-07-15 10:10:42.321679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.930 [2024-07-15 10:10:42.321700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:28.930 [2024-07-15 10:10:42.325109] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:28.930 [2024-07-15 10:10:42.325536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.930 [2024-07-15 10:10:42.325555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:28.930 [2024-07-15 10:10:42.329038] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:28.931 [2024-07-15 10:10:42.329464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.931 [2024-07-15 10:10:42.329491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.931 [2024-07-15 10:10:42.332890] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:28.931 [2024-07-15 10:10:42.333320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.931 [2024-07-15 10:10:42.333342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:28.931 [2024-07-15 10:10:42.336700] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:28.931 [2024-07-15 10:10:42.337131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.931 [2024-07-15 10:10:42.337155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:28.931 [2024-07-15 10:10:42.340590] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:28.931 [2024-07-15 10:10:42.340992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.931 [2024-07-15 10:10:42.341013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:28.931 [2024-07-15 10:10:42.344419] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:28.931 [2024-07-15 10:10:42.344849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.931 [2024-07-15 10:10:42.344870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.931 [2024-07-15 10:10:42.348222] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:28.931 [2024-07-15 10:10:42.348669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.931 [2024-07-15 10:10:42.348699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:28.931 [2024-07-15 10:10:42.352066] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:28.931 [2024-07-15 10:10:42.352485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.931 [2024-07-15 10:10:42.352506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:28.931 [2024-07-15 10:10:42.355906] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:28.931 [2024-07-15 10:10:42.356314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.931 [2024-07-15 10:10:42.356333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:28.931 [2024-07-15 10:10:42.359690] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:28.931 [2024-07-15 10:10:42.360095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.931 [2024-07-15 10:10:42.360115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.931 [2024-07-15 10:10:42.363454] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:28.931 [2024-07-15 10:10:42.363880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.931 [2024-07-15 10:10:42.363901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:28.931 [2024-07-15 10:10:42.367368] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:28.931 [2024-07-15 10:10:42.367836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.931 [2024-07-15 10:10:42.367858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:28.931 [2024-07-15 10:10:42.371478] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:28.931 [2024-07-15 10:10:42.371934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.931 [2024-07-15 10:10:42.371959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:28.931 [2024-07-15 10:10:42.375626] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:28.931 [2024-07-15 10:10:42.376074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.931 [2024-07-15 10:10:42.376098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.931 [2024-07-15 10:10:42.379564] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:28.931 [2024-07-15 10:10:42.380027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.931 [2024-07-15 10:10:42.380047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:28.931 [2024-07-15 10:10:42.383366] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:28.931 [2024-07-15 10:10:42.383816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.931 [2024-07-15 10:10:42.383837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:28.931 [2024-07-15 10:10:42.387133] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:28.931 [2024-07-15 10:10:42.387578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.931 [2024-07-15 10:10:42.387601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:28.931 [2024-07-15 10:10:42.391020] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:28.931 [2024-07-15 10:10:42.391464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.931 [2024-07-15 10:10:42.391485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.931 [2024-07-15 10:10:42.394955] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:28.931 [2024-07-15 10:10:42.395374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.931 [2024-07-15 10:10:42.395403] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:28.931 [2024-07-15 10:10:42.398772] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:28.931 [2024-07-15 10:10:42.399193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.931 [2024-07-15 10:10:42.399213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:28.931 [2024-07-15 10:10:42.402602] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:28.931 [2024-07-15 10:10:42.403038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.931 [2024-07-15 10:10:42.403060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:28.931 [2024-07-15 10:10:42.406495] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:28.931 [2024-07-15 10:10:42.406950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.931 [2024-07-15 10:10:42.406972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.931 [2024-07-15 10:10:42.410409] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:28.931 [2024-07-15 10:10:42.410858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.931 [2024-07-15 10:10:42.410875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:28.931 [2024-07-15 10:10:42.414267] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:28.931 [2024-07-15 10:10:42.414709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.931 [2024-07-15 10:10:42.414734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:28.931 [2024-07-15 10:10:42.418116] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:28.931 [2024-07-15 10:10:42.418564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.931 [2024-07-15 10:10:42.418591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:28.931 [2024-07-15 10:10:42.422022] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:28.931 [2024-07-15 10:10:42.422483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.931 
[2024-07-15 10:10:42.422505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.931 [2024-07-15 10:10:42.425898] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:28.931 [2024-07-15 10:10:42.426310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.931 [2024-07-15 10:10:42.426331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:28.931 [2024-07-15 10:10:42.429754] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:28.931 [2024-07-15 10:10:42.430194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.931 [2024-07-15 10:10:42.430215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:28.931 [2024-07-15 10:10:42.433640] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:28.931 [2024-07-15 10:10:42.434071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.931 [2024-07-15 10:10:42.434091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:28.931 [2024-07-15 10:10:42.437439] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:28.931 [2024-07-15 10:10:42.437899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.931 [2024-07-15 10:10:42.437921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.931 [2024-07-15 10:10:42.441393] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:28.931 [2024-07-15 10:10:42.441829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.932 [2024-07-15 10:10:42.441855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:28.932 [2024-07-15 10:10:42.445421] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:28.932 [2024-07-15 10:10:42.445870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.932 [2024-07-15 10:10:42.445894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:28.932 [2024-07-15 10:10:42.449429] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:28.932 [2024-07-15 10:10:42.449869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10688 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:29:28.932 [2024-07-15 10:10:42.449890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:28.932 [2024-07-15 10:10:42.453289] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:28.932 [2024-07-15 10:10:42.453717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.932 [2024-07-15 10:10:42.453754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.932 [2024-07-15 10:10:42.457122] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:28.932 [2024-07-15 10:10:42.457535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.932 [2024-07-15 10:10:42.457557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:28.932 [2024-07-15 10:10:42.460993] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:28.932 [2024-07-15 10:10:42.461424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.932 [2024-07-15 10:10:42.461450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:28.932 [2024-07-15 10:10:42.464851] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:28.932 [2024-07-15 10:10:42.465288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.932 [2024-07-15 10:10:42.465308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:28.932 [2024-07-15 10:10:42.468705] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:28.932 [2024-07-15 10:10:42.469112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.932 [2024-07-15 10:10:42.469133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.932 [2024-07-15 10:10:42.472494] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:28.932 [2024-07-15 10:10:42.472935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.932 [2024-07-15 10:10:42.472956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:28.932 [2024-07-15 10:10:42.476410] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:28.932 [2024-07-15 10:10:42.476851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:15 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.932 [2024-07-15 10:10:42.476872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:28.932 [2024-07-15 10:10:42.480234] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:28.932 [2024-07-15 10:10:42.480696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.932 [2024-07-15 10:10:42.480717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:28.932 [2024-07-15 10:10:42.484108] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:28.932 [2024-07-15 10:10:42.484542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.932 [2024-07-15 10:10:42.484563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.932 [2024-07-15 10:10:42.487868] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:28.932 [2024-07-15 10:10:42.488302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.932 [2024-07-15 10:10:42.488322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:28.932 [2024-07-15 10:10:42.491699] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:28.932 [2024-07-15 10:10:42.492108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.932 [2024-07-15 10:10:42.492129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:28.932 [2024-07-15 10:10:42.495494] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:28.932 [2024-07-15 10:10:42.495951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.932 [2024-07-15 10:10:42.495973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:28.932 [2024-07-15 10:10:42.499418] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:28.932 [2024-07-15 10:10:42.499869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.932 [2024-07-15 10:10:42.499892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.932 [2024-07-15 10:10:42.503328] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:28.932 [2024-07-15 10:10:42.503761] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.932 [2024-07-15 10:10:42.503785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:28.932 [2024-07-15 10:10:42.507100] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:28.932 [2024-07-15 10:10:42.507512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.932 [2024-07-15 10:10:42.507536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:28.932 [2024-07-15 10:10:42.511201] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:28.932 [2024-07-15 10:10:42.511622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.932 [2024-07-15 10:10:42.511645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:29.193 [2024-07-15 10:10:42.515230] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:29.193 [2024-07-15 10:10:42.515649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.193 [2024-07-15 10:10:42.515679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.193 [2024-07-15 10:10:42.519156] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:29.193 [2024-07-15 10:10:42.519598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.193 [2024-07-15 10:10:42.519622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:29.193 [2024-07-15 10:10:42.523211] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:29.193 [2024-07-15 10:10:42.523625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.193 [2024-07-15 10:10:42.523646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:29.193 [2024-07-15 10:10:42.527083] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:29.193 [2024-07-15 10:10:42.527525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.193 [2024-07-15 10:10:42.527546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:29.193 [2024-07-15 10:10:42.530971] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:29.193 
[2024-07-15 10:10:42.531385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.193 [2024-07-15 10:10:42.531406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.193 [2024-07-15 10:10:42.534879] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:29.193 [2024-07-15 10:10:42.535308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.193 [2024-07-15 10:10:42.535328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:29.193 [2024-07-15 10:10:42.538745] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:29.193 [2024-07-15 10:10:42.539185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.193 [2024-07-15 10:10:42.539209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:29.193 [2024-07-15 10:10:42.542797] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:29.193 [2024-07-15 10:10:42.543193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.193 [2024-07-15 10:10:42.543215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:29.193 [2024-07-15 10:10:42.546725] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:29.193 [2024-07-15 10:10:42.547151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.193 [2024-07-15 10:10:42.547169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.193 [2024-07-15 10:10:42.550775] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:29.193 [2024-07-15 10:10:42.551223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.193 [2024-07-15 10:10:42.551252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:29.193 [2024-07-15 10:10:42.554768] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:29.193 [2024-07-15 10:10:42.555188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.193 [2024-07-15 10:10:42.555209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:29.193 [2024-07-15 10:10:42.558583] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:29.193 [2024-07-15 10:10:42.558989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.193 [2024-07-15 10:10:42.559011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:29.193 [2024-07-15 10:10:42.562790] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:29.193 [2024-07-15 10:10:42.563222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.193 [2024-07-15 10:10:42.563244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.193 [2024-07-15 10:10:42.567115] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:29.193 [2024-07-15 10:10:42.567564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.193 [2024-07-15 10:10:42.567593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:29.193 [2024-07-15 10:10:42.571326] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:29.193 [2024-07-15 10:10:42.571798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.193 [2024-07-15 10:10:42.571822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:29.193 [2024-07-15 10:10:42.575465] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:29.193 [2024-07-15 10:10:42.575911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.193 [2024-07-15 10:10:42.575934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:29.193 [2024-07-15 10:10:42.579499] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:29.193 [2024-07-15 10:10:42.579958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.193 [2024-07-15 10:10:42.579982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.193 [2024-07-15 10:10:42.583736] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:29.193 [2024-07-15 10:10:42.584162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.193 [2024-07-15 10:10:42.584184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:29.193 [2024-07-15 10:10:42.587741] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:29.193 [2024-07-15 10:10:42.588156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.193 [2024-07-15 10:10:42.588177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:29.193 [2024-07-15 10:10:42.591772] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:29.193 [2024-07-15 10:10:42.592188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.193 [2024-07-15 10:10:42.592209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:29.193 [2024-07-15 10:10:42.595775] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:29.193 [2024-07-15 10:10:42.596182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.193 [2024-07-15 10:10:42.596203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.193 [2024-07-15 10:10:42.599544] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:29.193 [2024-07-15 10:10:42.600006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.193 [2024-07-15 10:10:42.600028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:29.193 [2024-07-15 10:10:42.603453] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:29.193 [2024-07-15 10:10:42.603899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.193 [2024-07-15 10:10:42.603923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:29.193 [2024-07-15 10:10:42.607293] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:29.194 [2024-07-15 10:10:42.607735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.194 [2024-07-15 10:10:42.607756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:29.194 [2024-07-15 10:10:42.611189] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:29.194 [2024-07-15 10:10:42.611603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.194 [2024-07-15 10:10:42.611622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
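[editor's note] Every record in this stretch follows one pattern: tcp.c:data_crc32_calc_done reports a data digest mismatch on the TCP qpair, and the host then prints the affected WRITE command together with its completion status, COMMAND TRANSIENT TRANSPORT ERROR (00/22). In NVMe/TCP the data digest is a CRC32C carried after the PDU data, so the regular cadence of these errors is consistent with a deliberate digest error-injection pass in this test rather than link corruption. The plain-C sketch below is illustrative only, not SPDK code: crc32c, verify_data_digest, and payload are made-up names, and any seeding or finalization detail beyond a standard reflected CRC32C is an assumption, not something taken from this log. It shows the recompute-and-compare check that is failing in the records above and below.

/* Minimal sketch (assumed names, not SPDK identifiers): the receiver
 * recomputes a CRC32C over the PDU data and compares it with the digest
 * carried on the wire; a mismatch is what the log reports as a
 * "Data digest error" and the host surfaces as a transient transport error. */
#include <stdint.h>
#include <stddef.h>
#include <stdio.h>

/* Bitwise CRC32C (Castagnoli), reflected polynomial 0x82F63B78. */
static uint32_t crc32c(const uint8_t *buf, size_t len)
{
    uint32_t crc = 0xFFFFFFFFu;

    for (size_t i = 0; i < len; i++) {
        crc ^= buf[i];
        for (int bit = 0; bit < 8; bit++)
            crc = (crc & 1u) ? (crc >> 1) ^ 0x82F63B78u : (crc >> 1);
    }
    return crc ^ 0xFFFFFFFFu;
}

/* Receiver-side check: nonzero return models the error path logged here. */
static int verify_data_digest(const uint8_t *data, size_t len, uint32_t wire_digest)
{
    return crc32c(data, len) != wire_digest;
}

int main(void)
{
    uint8_t payload[512] = { 0 };               /* sample PDU data buffer */
    uint32_t good = crc32c(payload, sizeof(payload));

    printf("match: %d\n", verify_data_digest(payload, sizeof(payload), good));      /* 0 */
    printf("error: %d\n", verify_data_digest(payload, sizeof(payload), good ^ 1u)); /* 1 */
    return 0;
}

[end editor's note]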
00:29:29.194 [2024-07-15 10:10:42.615225] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:29.194 [2024-07-15 10:10:42.615635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.194 [2024-07-15 10:10:42.615667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:29.194 [2024-07-15 10:10:42.619096] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:29.194 [2024-07-15 10:10:42.619499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.194 [2024-07-15 10:10:42.619521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:29.194 [2024-07-15 10:10:42.623177] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:29.194 [2024-07-15 10:10:42.623595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.194 [2024-07-15 10:10:42.623617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:29.194 [2024-07-15 10:10:42.627209] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:29.194 [2024-07-15 10:10:42.627667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.194 [2024-07-15 10:10:42.627705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.194 [2024-07-15 10:10:42.631538] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:29.194 [2024-07-15 10:10:42.631987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.194 [2024-07-15 10:10:42.632012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:29.194 [2024-07-15 10:10:42.635792] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:29.194 [2024-07-15 10:10:42.636253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.194 [2024-07-15 10:10:42.636284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:29.194 [2024-07-15 10:10:42.640184] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:29.194 [2024-07-15 10:10:42.640653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.194 [2024-07-15 10:10:42.640695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:29.194 [2024-07-15 10:10:42.644423] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:29.194 [2024-07-15 10:10:42.644883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.194 [2024-07-15 10:10:42.644910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.194 [2024-07-15 10:10:42.648470] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:29.194 [2024-07-15 10:10:42.648913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.194 [2024-07-15 10:10:42.648937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:29.194 [2024-07-15 10:10:42.652553] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:29.194 [2024-07-15 10:10:42.653012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.194 [2024-07-15 10:10:42.653038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:29.194 [2024-07-15 10:10:42.656658] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:29.194 [2024-07-15 10:10:42.657115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.194 [2024-07-15 10:10:42.657139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:29.194 [2024-07-15 10:10:42.660728] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:29.194 [2024-07-15 10:10:42.661159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.194 [2024-07-15 10:10:42.661180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.194 [2024-07-15 10:10:42.664650] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:29.194 [2024-07-15 10:10:42.665118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.194 [2024-07-15 10:10:42.665140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:29.194 [2024-07-15 10:10:42.668682] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:29.194 [2024-07-15 10:10:42.669115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.194 [2024-07-15 10:10:42.669136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:29.194 [2024-07-15 10:10:42.672539] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:29.194 [2024-07-15 10:10:42.672980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.194 [2024-07-15 10:10:42.673001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:29.194 [2024-07-15 10:10:42.676484] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:29.194 [2024-07-15 10:10:42.676914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.194 [2024-07-15 10:10:42.676936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.194 [2024-07-15 10:10:42.680518] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:29.194 [2024-07-15 10:10:42.680983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.194 [2024-07-15 10:10:42.681009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:29.194 [2024-07-15 10:10:42.684532] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:29.194 [2024-07-15 10:10:42.684952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.194 [2024-07-15 10:10:42.684974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:29.194 [2024-07-15 10:10:42.688541] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:29.194 [2024-07-15 10:10:42.688999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.194 [2024-07-15 10:10:42.689021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:29.194 [2024-07-15 10:10:42.692612] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:29.194 [2024-07-15 10:10:42.693058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.194 [2024-07-15 10:10:42.693081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.194 [2024-07-15 10:10:42.696482] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:29.194 [2024-07-15 10:10:42.696929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.194 [2024-07-15 10:10:42.696957] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:29.194 [2024-07-15 10:10:42.700490] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:29.194 [2024-07-15 10:10:42.700963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.194 [2024-07-15 10:10:42.700993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:29.194 [2024-07-15 10:10:42.704607] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:29.195 [2024-07-15 10:10:42.705065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.195 [2024-07-15 10:10:42.705096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:29.195 [2024-07-15 10:10:42.708835] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:29.195 [2024-07-15 10:10:42.709279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.195 [2024-07-15 10:10:42.709306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.195 [2024-07-15 10:10:42.712760] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:29.195 [2024-07-15 10:10:42.713213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.195 [2024-07-15 10:10:42.713242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:29.195 [2024-07-15 10:10:42.716869] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:29.195 [2024-07-15 10:10:42.717289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.195 [2024-07-15 10:10:42.717312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:29.195 [2024-07-15 10:10:42.720995] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:29.195 [2024-07-15 10:10:42.721432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.195 [2024-07-15 10:10:42.721459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:29.195 [2024-07-15 10:10:42.725176] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:29.195 [2024-07-15 10:10:42.725633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.195 
[2024-07-15 10:10:42.725674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.195 [2024-07-15 10:10:42.729330] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:29.195 [2024-07-15 10:10:42.729789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.195 [2024-07-15 10:10:42.729812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:29.195 [2024-07-15 10:10:42.733331] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:29.195 [2024-07-15 10:10:42.733750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.195 [2024-07-15 10:10:42.733772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:29.195 [2024-07-15 10:10:42.737724] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:29.195 [2024-07-15 10:10:42.738192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.195 [2024-07-15 10:10:42.738219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:29.195 [2024-07-15 10:10:42.741968] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:29.195 [2024-07-15 10:10:42.742408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.195 [2024-07-15 10:10:42.742437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.195 [2024-07-15 10:10:42.746202] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:29.195 [2024-07-15 10:10:42.746654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.195 [2024-07-15 10:10:42.746694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:29.195 [2024-07-15 10:10:42.750456] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:29.195 [2024-07-15 10:10:42.750937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.195 [2024-07-15 10:10:42.750967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:29.195 [2024-07-15 10:10:42.754700] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:29.195 [2024-07-15 10:10:42.755148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23680 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.195 [2024-07-15 10:10:42.755176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:29.195 [2024-07-15 10:10:42.758832] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:29.195 [2024-07-15 10:10:42.759294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.195 [2024-07-15 10:10:42.759321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.195 [2024-07-15 10:10:42.762929] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:29.195 [2024-07-15 10:10:42.763352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.195 [2024-07-15 10:10:42.763379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:29.195 [2024-07-15 10:10:42.766842] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:29.195 [2024-07-15 10:10:42.767266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.195 [2024-07-15 10:10:42.767299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:29.195 [2024-07-15 10:10:42.770785] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:29.195 [2024-07-15 10:10:42.771262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.195 [2024-07-15 10:10:42.771286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:29.195 [2024-07-15 10:10:42.774903] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:29.195 [2024-07-15 10:10:42.775371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.195 [2024-07-15 10:10:42.775400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.456 [2024-07-15 10:10:42.779129] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:29.456 [2024-07-15 10:10:42.779543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.456 [2024-07-15 10:10:42.779564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:29.456 [2024-07-15 10:10:42.783138] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:29.456 [2024-07-15 10:10:42.783572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.456 [2024-07-15 10:10:42.783597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:29.456 [2024-07-15 10:10:42.787149] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:29.456 [2024-07-15 10:10:42.787565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.456 [2024-07-15 10:10:42.787586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:29.456 [2024-07-15 10:10:42.791035] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:29.456 [2024-07-15 10:10:42.791488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.456 [2024-07-15 10:10:42.791527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.456 [2024-07-15 10:10:42.794948] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:29.456 [2024-07-15 10:10:42.795343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.456 [2024-07-15 10:10:42.795364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:29.456 [2024-07-15 10:10:42.798737] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:29.456 [2024-07-15 10:10:42.799157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.456 [2024-07-15 10:10:42.799178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:29.456 [2024-07-15 10:10:42.802737] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:29.456 [2024-07-15 10:10:42.803156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.456 [2024-07-15 10:10:42.803176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:29.456 [2024-07-15 10:10:42.806816] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:29.456 [2024-07-15 10:10:42.807232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.456 [2024-07-15 10:10:42.807252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.456 [2024-07-15 10:10:42.810907] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:29.456 [2024-07-15 10:10:42.811329] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.456 [2024-07-15 10:10:42.811353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:29.456 [2024-07-15 10:10:42.815154] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:29.456 [2024-07-15 10:10:42.815615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.456 [2024-07-15 10:10:42.815639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:29.456 [2024-07-15 10:10:42.819236] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:29.456 [2024-07-15 10:10:42.819667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.456 [2024-07-15 10:10:42.819705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:29.456 [2024-07-15 10:10:42.823449] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:29.456 [2024-07-15 10:10:42.823899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.456 [2024-07-15 10:10:42.823922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.456 [2024-07-15 10:10:42.827504] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:29.456 [2024-07-15 10:10:42.827964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.456 [2024-07-15 10:10:42.827985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:29.456 [2024-07-15 10:10:42.831591] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:29.457 [2024-07-15 10:10:42.832014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.457 [2024-07-15 10:10:42.832035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:29.457 [2024-07-15 10:10:42.835601] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:29.457 [2024-07-15 10:10:42.836022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.457 [2024-07-15 10:10:42.836043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:29.457 [2024-07-15 10:10:42.839551] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:29.457 
[2024-07-15 10:10:42.839997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.457 [2024-07-15 10:10:42.840017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.457 [2024-07-15 10:10:42.843730] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:29.457 [2024-07-15 10:10:42.844206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.457 [2024-07-15 10:10:42.844235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:29.457 [2024-07-15 10:10:42.847661] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:29.457 [2024-07-15 10:10:42.848078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.457 [2024-07-15 10:10:42.848100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:29.457 [2024-07-15 10:10:42.851602] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:29.457 [2024-07-15 10:10:42.852070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.457 [2024-07-15 10:10:42.852090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:29.457 [2024-07-15 10:10:42.855600] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:29.457 [2024-07-15 10:10:42.856046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.457 [2024-07-15 10:10:42.856074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.457 [2024-07-15 10:10:42.859742] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:29.457 [2024-07-15 10:10:42.860192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.457 [2024-07-15 10:10:42.860218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:29.457 [2024-07-15 10:10:42.863968] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:29.457 [2024-07-15 10:10:42.864389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.457 [2024-07-15 10:10:42.864432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:29.457 [2024-07-15 10:10:42.868115] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:29.457 [2024-07-15 10:10:42.868534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.457 [2024-07-15 10:10:42.868560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:29.457 [2024-07-15 10:10:42.872192] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:29.457 [2024-07-15 10:10:42.872640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.457 [2024-07-15 10:10:42.872674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.457 [2024-07-15 10:10:42.876147] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:29.457 [2024-07-15 10:10:42.876586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.457 [2024-07-15 10:10:42.876619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:29.457 [2024-07-15 10:10:42.880216] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:29.457 [2024-07-15 10:10:42.880685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.457 [2024-07-15 10:10:42.880713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:29.457 [2024-07-15 10:10:42.884354] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:29.457 [2024-07-15 10:10:42.884863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.457 [2024-07-15 10:10:42.884899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:29.457 [2024-07-15 10:10:42.888523] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:29.457 [2024-07-15 10:10:42.888996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.457 [2024-07-15 10:10:42.889046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.457 [2024-07-15 10:10:42.892700] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:29.457 [2024-07-15 10:10:42.893184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.457 [2024-07-15 10:10:42.893215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:29.457 [2024-07-15 10:10:42.896914] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:29.457 [2024-07-15 10:10:42.897373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.457 [2024-07-15 10:10:42.897403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:29.457 [2024-07-15 10:10:42.901104] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:29.457 [2024-07-15 10:10:42.901520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.457 [2024-07-15 10:10:42.901544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:29.457 [2024-07-15 10:10:42.905036] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:29.457 [2024-07-15 10:10:42.905478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.457 [2024-07-15 10:10:42.905505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.457 [2024-07-15 10:10:42.908913] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:29.457 [2024-07-15 10:10:42.909344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.457 [2024-07-15 10:10:42.909371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:29.457 [2024-07-15 10:10:42.912789] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:29.457 [2024-07-15 10:10:42.913221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.457 [2024-07-15 10:10:42.913248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:29.457 [2024-07-15 10:10:42.916697] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:29.457 [2024-07-15 10:10:42.917107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.457 [2024-07-15 10:10:42.917130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:29.457 [2024-07-15 10:10:42.920466] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:29.457 [2024-07-15 10:10:42.920894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.458 [2024-07-15 10:10:42.920916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:29:29.458 [2024-07-15 10:10:42.924439] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:29.458 [2024-07-15 10:10:42.924856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.458 [2024-07-15 10:10:42.924877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:29.458 [2024-07-15 10:10:42.928485] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:29.458 [2024-07-15 10:10:42.928948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.458 [2024-07-15 10:10:42.928974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:29.458 [2024-07-15 10:10:42.932490] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:29.458 [2024-07-15 10:10:42.932941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.458 [2024-07-15 10:10:42.932964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:29.458 [2024-07-15 10:10:42.936468] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:29.458 [2024-07-15 10:10:42.936893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.458 [2024-07-15 10:10:42.936914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.458 [2024-07-15 10:10:42.940310] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:29.458 [2024-07-15 10:10:42.940728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.458 [2024-07-15 10:10:42.940745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:29.458 [2024-07-15 10:10:42.944181] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:29.458 [2024-07-15 10:10:42.944649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.458 [2024-07-15 10:10:42.944683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:29.458 [2024-07-15 10:10:42.948146] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:29.458 [2024-07-15 10:10:42.948591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.458 [2024-07-15 10:10:42.948613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:29.458 [2024-07-15 10:10:42.952171] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:29.458 [2024-07-15 10:10:42.952586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.458 [2024-07-15 10:10:42.952608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.458 [2024-07-15 10:10:42.955996] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:29.458 [2024-07-15 10:10:42.956430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.458 [2024-07-15 10:10:42.956452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:29.458 [2024-07-15 10:10:42.959918] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:29.458 [2024-07-15 10:10:42.960386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.458 [2024-07-15 10:10:42.960411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:29.458 [2024-07-15 10:10:42.964060] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:29.458 [2024-07-15 10:10:42.964520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.458 [2024-07-15 10:10:42.964544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:29.458 [2024-07-15 10:10:42.968008] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:29.458 [2024-07-15 10:10:42.968450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.458 [2024-07-15 10:10:42.968474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.458 [2024-07-15 10:10:42.971958] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:29.458 [2024-07-15 10:10:42.972403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.458 [2024-07-15 10:10:42.972425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:29.458 [2024-07-15 10:10:42.975847] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:29.458 [2024-07-15 10:10:42.976248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.458 [2024-07-15 10:10:42.976270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:29.458 [2024-07-15 10:10:42.979751] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:29.458 [2024-07-15 10:10:42.980199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.458 [2024-07-15 10:10:42.980221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:29.458 [2024-07-15 10:10:42.983738] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:29.458 [2024-07-15 10:10:42.984180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.458 [2024-07-15 10:10:42.984200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.458 [2024-07-15 10:10:42.987715] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:29.458 [2024-07-15 10:10:42.988151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.458 [2024-07-15 10:10:42.988172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:29.458 [2024-07-15 10:10:42.991646] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:29.458 [2024-07-15 10:10:42.992063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.458 [2024-07-15 10:10:42.992085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:29.458 [2024-07-15 10:10:42.995582] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:29.458 [2024-07-15 10:10:42.996038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.458 [2024-07-15 10:10:42.996075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:29.458 [2024-07-15 10:10:42.999426] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:29.458 [2024-07-15 10:10:42.999859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.458 [2024-07-15 10:10:42.999880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.458 [2024-07-15 10:10:43.003302] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:29.459 [2024-07-15 10:10:43.003751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.459 [2024-07-15 10:10:43.003773] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:29.459 [2024-07-15 10:10:43.007140] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:29.459 [2024-07-15 10:10:43.007567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.459 [2024-07-15 10:10:43.007588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:29.459 [2024-07-15 10:10:43.010982] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:29.459 [2024-07-15 10:10:43.011403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.459 [2024-07-15 10:10:43.011423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:29.459 [2024-07-15 10:10:43.014812] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:29.459 [2024-07-15 10:10:43.015208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.459 [2024-07-15 10:10:43.015228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.459 [2024-07-15 10:10:43.018717] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:29.459 [2024-07-15 10:10:43.019156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.459 [2024-07-15 10:10:43.019184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:29.459 [2024-07-15 10:10:43.022653] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:29.459 [2024-07-15 10:10:43.023108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.459 [2024-07-15 10:10:43.023145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:29.459 [2024-07-15 10:10:43.026530] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:29.459 [2024-07-15 10:10:43.026954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.459 [2024-07-15 10:10:43.026976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:29.459 [2024-07-15 10:10:43.030399] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:29.459 [2024-07-15 10:10:43.030820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.459 
[2024-07-15 10:10:43.030841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.459 [2024-07-15 10:10:43.034357] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:29.459 [2024-07-15 10:10:43.034794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.459 [2024-07-15 10:10:43.034821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:29.459 [2024-07-15 10:10:43.038289] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:29.459 [2024-07-15 10:10:43.038757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.459 [2024-07-15 10:10:43.038780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:29.720 [2024-07-15 10:10:43.042449] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:29.720 [2024-07-15 10:10:43.042884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.720 [2024-07-15 10:10:43.042909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:29.720 [2024-07-15 10:10:43.046421] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:29.720 [2024-07-15 10:10:43.046858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.720 [2024-07-15 10:10:43.046881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.720 [2024-07-15 10:10:43.050398] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:29.720 [2024-07-15 10:10:43.050840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.721 [2024-07-15 10:10:43.050863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:29.721 [2024-07-15 10:10:43.054300] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:29.721 [2024-07-15 10:10:43.054712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.721 [2024-07-15 10:10:43.054734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:29.721 [2024-07-15 10:10:43.058236] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:29.721 [2024-07-15 10:10:43.058645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6144 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:29:29.721 [2024-07-15 10:10:43.058675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:29.721 [2024-07-15 10:10:43.062075] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:29.721 [2024-07-15 10:10:43.062480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.721 [2024-07-15 10:10:43.062502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.721 [2024-07-15 10:10:43.066033] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:29.721 [2024-07-15 10:10:43.066461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.721 [2024-07-15 10:10:43.066482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:29.721 [2024-07-15 10:10:43.069973] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:29.721 [2024-07-15 10:10:43.070372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.721 [2024-07-15 10:10:43.070394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:29.721 [2024-07-15 10:10:43.073884] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:29.721 [2024-07-15 10:10:43.074296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.721 [2024-07-15 10:10:43.074317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:29.721 [2024-07-15 10:10:43.077729] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:29.721 [2024-07-15 10:10:43.078134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.721 [2024-07-15 10:10:43.078155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.721 [2024-07-15 10:10:43.081499] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:29.721 [2024-07-15 10:10:43.081964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.721 [2024-07-15 10:10:43.081986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:29.721 [2024-07-15 10:10:43.085475] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:29.721 [2024-07-15 10:10:43.085921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:15 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.721 [2024-07-15 10:10:43.085942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:29.721 [2024-07-15 10:10:43.089399] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:29.721 [2024-07-15 10:10:43.089857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.721 [2024-07-15 10:10:43.089880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:29.721 [2024-07-15 10:10:43.093428] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:29.721 [2024-07-15 10:10:43.093835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.721 [2024-07-15 10:10:43.093856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.721 [2024-07-15 10:10:43.097421] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:29.721 [2024-07-15 10:10:43.097883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.721 [2024-07-15 10:10:43.097904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:29.721 [2024-07-15 10:10:43.101331] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:29.721 [2024-07-15 10:10:43.101782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.721 [2024-07-15 10:10:43.101803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:29.721 [2024-07-15 10:10:43.105284] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:29.721 [2024-07-15 10:10:43.105734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.721 [2024-07-15 10:10:43.105756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:29.721 [2024-07-15 10:10:43.109224] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:29.721 [2024-07-15 10:10:43.109696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.721 [2024-07-15 10:10:43.109716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.721 [2024-07-15 10:10:43.113090] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:29.721 [2024-07-15 10:10:43.113522] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.721 [2024-07-15 10:10:43.113541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:29.721 [2024-07-15 10:10:43.117021] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:29.721 [2024-07-15 10:10:43.117457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.721 [2024-07-15 10:10:43.117481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:29.721 [2024-07-15 10:10:43.120907] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:29.721 [2024-07-15 10:10:43.121337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.721 [2024-07-15 10:10:43.121359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:29.721 [2024-07-15 10:10:43.124714] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:29.721 [2024-07-15 10:10:43.125152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.721 [2024-07-15 10:10:43.125174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.721 [2024-07-15 10:10:43.128664] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:29.721 [2024-07-15 10:10:43.129090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.721 [2024-07-15 10:10:43.129111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:29.721 [2024-07-15 10:10:43.132502] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:29.721 [2024-07-15 10:10:43.132938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.721 [2024-07-15 10:10:43.132958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:29.721 [2024-07-15 10:10:43.136257] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:29.721 [2024-07-15 10:10:43.136704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.721 [2024-07-15 10:10:43.136725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:29.721 [2024-07-15 10:10:43.140034] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:29.721 
[2024-07-15 10:10:43.140466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.721 [2024-07-15 10:10:43.140486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.721 [2024-07-15 10:10:43.143863] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:29.721 [2024-07-15 10:10:43.144281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.721 [2024-07-15 10:10:43.144301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:29.721 [2024-07-15 10:10:43.147708] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:29.721 [2024-07-15 10:10:43.148095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.721 [2024-07-15 10:10:43.148116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:29.721 [2024-07-15 10:10:43.151453] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:29.722 [2024-07-15 10:10:43.151885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.722 [2024-07-15 10:10:43.151903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:29.722 [2024-07-15 10:10:43.155433] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:29.722 [2024-07-15 10:10:43.155879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.722 [2024-07-15 10:10:43.155900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.722 [2024-07-15 10:10:43.159381] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:29.722 [2024-07-15 10:10:43.159822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.722 [2024-07-15 10:10:43.159842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:29.722 [2024-07-15 10:10:43.163200] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:29.722 [2024-07-15 10:10:43.163612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.722 [2024-07-15 10:10:43.163634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:29.722 [2024-07-15 10:10:43.167122] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:29.722 [2024-07-15 10:10:43.167542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.722 [2024-07-15 10:10:43.167564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:29.722 [2024-07-15 10:10:43.171300] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:29.722 [2024-07-15 10:10:43.171727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.722 [2024-07-15 10:10:43.171748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.722 [2024-07-15 10:10:43.175355] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:29.722 [2024-07-15 10:10:43.175805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.722 [2024-07-15 10:10:43.175830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:29.722 [2024-07-15 10:10:43.179432] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:29.722 [2024-07-15 10:10:43.179864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.722 [2024-07-15 10:10:43.179887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:29.722 [2024-07-15 10:10:43.183705] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:29.722 [2024-07-15 10:10:43.184151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.722 [2024-07-15 10:10:43.184174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:29.722 [2024-07-15 10:10:43.187713] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:29.722 [2024-07-15 10:10:43.188139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.722 [2024-07-15 10:10:43.188161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.722 [2024-07-15 10:10:43.192079] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:29.722 [2024-07-15 10:10:43.192567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.722 [2024-07-15 10:10:43.192590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:29.722 [2024-07-15 10:10:43.196601] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:29.722 [2024-07-15 10:10:43.197098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.722 [2024-07-15 10:10:43.197130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:29.722 [2024-07-15 10:10:43.201099] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:29.722 [2024-07-15 10:10:43.201561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.722 [2024-07-15 10:10:43.201593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:29.722 [2024-07-15 10:10:43.205447] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:29.722 [2024-07-15 10:10:43.205912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.722 [2024-07-15 10:10:43.205936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.722 [2024-07-15 10:10:43.209843] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:29.722 [2024-07-15 10:10:43.210276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.722 [2024-07-15 10:10:43.210300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:29.722 [2024-07-15 10:10:43.213970] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:29.722 [2024-07-15 10:10:43.214400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.722 [2024-07-15 10:10:43.214422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:29.722 [2024-07-15 10:10:43.218274] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:29.722 [2024-07-15 10:10:43.218724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.722 [2024-07-15 10:10:43.218747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:29.722 [2024-07-15 10:10:43.222870] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:29.722 [2024-07-15 10:10:43.223334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.722 [2024-07-15 10:10:43.223365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
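Note on the repeated "Data digest error" / COMMAND TRANSIENT TRANSPORT ERROR (00/22) pairs in this stretch of output: the NVMe/TCP data digest (DDGST) is a CRC32C computed over each DATA PDU payload, and when the recomputed digest does not match the received one, data_crc32_calc_done reports a data digest error and the affected WRITE completes with the transient transport error status printed above. As an illustrative, self-contained sketch only (not SPDK's implementation), a minimal CRC32C calculation of the kind DDGST uses looks like this:

/* Illustrative sketch only -- not SPDK code. NVMe/TCP's data digest (DDGST)
 * is a CRC32C (Castagnoli) over the DATA PDU payload; a receiver recomputes
 * it and flags a mismatch as a data digest error, as seen in the log above. */
#include <stdint.h>
#include <stddef.h>
#include <stdio.h>

static uint32_t crc32c(const uint8_t *buf, size_t len)
{
    uint32_t crc = 0xFFFFFFFFu;          /* standard CRC32C initial value */

    for (size_t i = 0; i < len; i++) {
        crc ^= buf[i];
        for (int bit = 0; bit < 8; bit++) {
            /* 0x82F63B78 is the reflected Castagnoli polynomial */
            crc = (crc & 1) ? (crc >> 1) ^ 0x82F63B78u : crc >> 1;
        }
    }
    return crc ^ 0xFFFFFFFFu;            /* final XOR */
}

int main(void)
{
    const uint8_t msg[] = "123456789";
    /* The well-known CRC32C check value for "123456789" is 0xE3069283. */
    printf("crc32c = 0x%08X\n", crc32c(msg, sizeof(msg) - 1));
    return 0;
}

Compiling and running this prints 0xE3069283, the standard CRC32C check value for "123456789"; production code would normally use a table-driven or hardware-accelerated (SSE4.2) variant rather than this bitwise loop.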
00:29:29.722 [2024-07-15 10:10:43.227097] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:29.722 [2024-07-15 10:10:43.227495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.722 [2024-07-15 10:10:43.227525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:29.722 [2024-07-15 10:10:43.230909] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:29.722 [2024-07-15 10:10:43.231348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.722 [2024-07-15 10:10:43.231369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:29.722 [2024-07-15 10:10:43.234737] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:29.722 [2024-07-15 10:10:43.235117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.722 [2024-07-15 10:10:43.235138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:29.722 [2024-07-15 10:10:43.238583] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:29.722 [2024-07-15 10:10:43.238973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.722 [2024-07-15 10:10:43.238995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.722 [2024-07-15 10:10:43.242364] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:29.722 [2024-07-15 10:10:43.242815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.722 [2024-07-15 10:10:43.242838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:29.722 [2024-07-15 10:10:43.246474] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:29.722 [2024-07-15 10:10:43.246882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.722 [2024-07-15 10:10:43.246903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:29.722 [2024-07-15 10:10:43.250233] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:29.722 [2024-07-15 10:10:43.250635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.722 [2024-07-15 10:10:43.250671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:29.722 [2024-07-15 10:10:43.253913] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:29.722 [2024-07-15 10:10:43.254289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.722 [2024-07-15 10:10:43.254309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.722 [2024-07-15 10:10:43.257569] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:29.722 [2024-07-15 10:10:43.257982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.723 [2024-07-15 10:10:43.258002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:29.723 [2024-07-15 10:10:43.261121] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:29.723 [2024-07-15 10:10:43.261494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.723 [2024-07-15 10:10:43.261515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:29.723 [2024-07-15 10:10:43.264842] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:29.723 [2024-07-15 10:10:43.265230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.723 [2024-07-15 10:10:43.265253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:29.723 [2024-07-15 10:10:43.268530] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:29.723 [2024-07-15 10:10:43.268930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.723 [2024-07-15 10:10:43.268951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.723 [2024-07-15 10:10:43.272091] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:29.723 [2024-07-15 10:10:43.272512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.723 [2024-07-15 10:10:43.272535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:29.723 [2024-07-15 10:10:43.275936] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:29.723 [2024-07-15 10:10:43.276324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.723 [2024-07-15 10:10:43.276346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:29.723 [2024-07-15 10:10:43.279586] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:29.723 [2024-07-15 10:10:43.279970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.723 [2024-07-15 10:10:43.279991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:29.723 [2024-07-15 10:10:43.283251] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:29.723 [2024-07-15 10:10:43.283643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.723 [2024-07-15 10:10:43.283672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.723 [2024-07-15 10:10:43.286984] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:29.723 [2024-07-15 10:10:43.287366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.723 [2024-07-15 10:10:43.287388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:29.723 [2024-07-15 10:10:43.290680] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:29.723 [2024-07-15 10:10:43.291078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.723 [2024-07-15 10:10:43.291098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:29.723 [2024-07-15 10:10:43.294355] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:29.723 [2024-07-15 10:10:43.294765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.723 [2024-07-15 10:10:43.294781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:29.723 [2024-07-15 10:10:43.297980] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:29.723 [2024-07-15 10:10:43.298376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.723 [2024-07-15 10:10:43.298418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.723 [2024-07-15 10:10:43.301788] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:29.723 [2024-07-15 10:10:43.302174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.723 [2024-07-15 10:10:43.302214] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:29.983 [2024-07-15 10:10:43.305792] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:29.983 [2024-07-15 10:10:43.306166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.983 [2024-07-15 10:10:43.306188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:29.983 [2024-07-15 10:10:43.309464] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:29.983 [2024-07-15 10:10:43.309877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.983 [2024-07-15 10:10:43.309900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:29.983 [2024-07-15 10:10:43.313203] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:29.983 [2024-07-15 10:10:43.313621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.983 [2024-07-15 10:10:43.313644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.983 [2024-07-15 10:10:43.316914] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:29.983 [2024-07-15 10:10:43.317337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.983 [2024-07-15 10:10:43.317359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:29.983 [2024-07-15 10:10:43.320554] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:29.983 [2024-07-15 10:10:43.320967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.983 [2024-07-15 10:10:43.320990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:29.983 [2024-07-15 10:10:43.324231] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:29.983 [2024-07-15 10:10:43.324628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.983 [2024-07-15 10:10:43.324650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:29.983 [2024-07-15 10:10:43.327861] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:29.983 [2024-07-15 10:10:43.328228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.983 
[2024-07-15 10:10:43.328248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.983 [2024-07-15 10:10:43.331632] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:29.983 [2024-07-15 10:10:43.332032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.983 [2024-07-15 10:10:43.332052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:29.983 [2024-07-15 10:10:43.335249] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:29.983 [2024-07-15 10:10:43.335632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.983 [2024-07-15 10:10:43.335654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:29.983 [2024-07-15 10:10:43.339091] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:29.983 [2024-07-15 10:10:43.339466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.983 [2024-07-15 10:10:43.339488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:29.983 [2024-07-15 10:10:43.342661] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:29.983 [2024-07-15 10:10:43.343050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.983 [2024-07-15 10:10:43.343070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.983 [2024-07-15 10:10:43.346276] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:29.983 [2024-07-15 10:10:43.346668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.983 [2024-07-15 10:10:43.346689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:29.983 [2024-07-15 10:10:43.349921] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:29.983 [2024-07-15 10:10:43.350304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.983 [2024-07-15 10:10:43.350325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:29.983 [2024-07-15 10:10:43.353512] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:29.983 [2024-07-15 10:10:43.353904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12032 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.983 [2024-07-15 10:10:43.353921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:29.983 [2024-07-15 10:10:43.357109] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:29.983 [2024-07-15 10:10:43.357508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.983 [2024-07-15 10:10:43.357536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.983 [2024-07-15 10:10:43.360785] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:29.983 [2024-07-15 10:10:43.361157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.983 [2024-07-15 10:10:43.361178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:29.983 [2024-07-15 10:10:43.364257] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:29.983 [2024-07-15 10:10:43.364662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.983 [2024-07-15 10:10:43.364691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:29.983 [2024-07-15 10:10:43.367777] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:29.983 [2024-07-15 10:10:43.368171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.983 [2024-07-15 10:10:43.368191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:29.983 [2024-07-15 10:10:43.371558] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:29.983 [2024-07-15 10:10:43.371949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.983 [2024-07-15 10:10:43.371970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.983 [2024-07-15 10:10:43.375221] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:29.983 [2024-07-15 10:10:43.375616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.983 [2024-07-15 10:10:43.375638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:29.983 [2024-07-15 10:10:43.378936] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:29.983 [2024-07-15 10:10:43.379308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.983 [2024-07-15 10:10:43.379329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:29.983 [2024-07-15 10:10:43.382524] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:29.983 [2024-07-15 10:10:43.382925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.983 [2024-07-15 10:10:43.382944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:29.983 [2024-07-15 10:10:43.386067] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:29.983 [2024-07-15 10:10:43.386435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.983 [2024-07-15 10:10:43.386455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.983 [2024-07-15 10:10:43.389613] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:29.983 [2024-07-15 10:10:43.390002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.983 [2024-07-15 10:10:43.390022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:29.983 [2024-07-15 10:10:43.393195] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:29.983 [2024-07-15 10:10:43.393583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.983 [2024-07-15 10:10:43.393604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:29.984 [2024-07-15 10:10:43.396756] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b01bc0) with pdu=0x2000190fef90 00:29:29.984 [2024-07-15 10:10:43.397091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.984 [2024-07-15 10:10:43.397111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:29.984 00:29:29.984 Latency(us) 00:29:29.984 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:29.984 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:29:29.984 nvme0n1 : 2.00 7796.01 974.50 0.00 0.00 2048.68 1552.54 5208.54 00:29:29.984 =================================================================================================================== 00:29:29.984 Total : 7796.01 974.50 0.00 0.00 2048.68 1552.54 5208.54 00:29:29.984 0 00:29:29.984 10:10:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:29:29.984 10:10:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc 
bdev_get_iostat -b nvme0n1 00:29:29.984 10:10:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:29:29.984 10:10:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:29:29.984 | .driver_specific 00:29:29.984 | .nvme_error 00:29:29.984 | .status_code 00:29:29.984 | .command_transient_transport_error' 00:29:30.243 10:10:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 503 > 0 )) 00:29:30.243 10:10:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 93725 00:29:30.243 10:10:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 93725 ']' 00:29:30.243 10:10:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 93725 00:29:30.243 10:10:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:29:30.243 10:10:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:30.243 10:10:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 93725 00:29:30.243 10:10:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:29:30.243 10:10:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:29:30.243 killing process with pid 93725 00:29:30.243 10:10:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 93725' 00:29:30.243 10:10:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 93725 00:29:30.243 Received shutdown signal, test time was about 2.000000 seconds 00:29:30.243 00:29:30.243 Latency(us) 00:29:30.243 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:30.243 =================================================================================================================== 00:29:30.243 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:30.243 10:10:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 93725 00:29:30.503 10:10:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 93421 00:29:30.503 10:10:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 93421 ']' 00:29:30.503 10:10:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 93421 00:29:30.503 10:10:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:29:30.503 10:10:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:30.503 10:10:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 93421 00:29:30.503 10:10:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:29:30.503 10:10:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:29:30.503 10:10:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 93421' 00:29:30.503 killing process with pid 93421 00:29:30.503 10:10:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 93421 00:29:30.503 10:10:43 
nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 93421 00:29:30.503 00:29:30.503 real 0m17.437s 00:29:30.503 user 0m32.962s 00:29:30.503 sys 0m4.343s 00:29:30.503 10:10:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:30.503 10:10:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:30.503 ************************************ 00:29:30.503 END TEST nvmf_digest_error 00:29:30.503 ************************************ 00:29:30.763 10:10:44 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1142 -- # return 0 00:29:30.763 10:10:44 nvmf_tcp.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:29:30.763 10:10:44 nvmf_tcp.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:29:30.763 10:10:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@488 -- # nvmfcleanup 00:29:30.763 10:10:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@117 -- # sync 00:29:30.763 10:10:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:30.763 10:10:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@120 -- # set +e 00:29:30.763 10:10:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:30.763 10:10:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:30.763 rmmod nvme_tcp 00:29:30.763 rmmod nvme_fabrics 00:29:30.763 rmmod nvme_keyring 00:29:30.763 10:10:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:30.763 10:10:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@124 -- # set -e 00:29:30.763 10:10:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@125 -- # return 0 00:29:30.763 10:10:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@489 -- # '[' -n 93421 ']' 00:29:30.763 10:10:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@490 -- # killprocess 93421 00:29:30.763 10:10:44 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@948 -- # '[' -z 93421 ']' 00:29:30.763 10:10:44 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@952 -- # kill -0 93421 00:29:30.763 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (93421) - No such process 00:29:30.763 Process with pid 93421 is not found 00:29:30.763 10:10:44 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@975 -- # echo 'Process with pid 93421 is not found' 00:29:30.763 10:10:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:29:30.763 10:10:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:29:30.763 10:10:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:29:30.763 10:10:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:30.763 10:10:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:30.763 10:10:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:30.763 10:10:44 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:30.763 10:10:44 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:30.763 10:10:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:29:31.023 00:29:31.023 real 0m35.587s 00:29:31.023 user 1m5.682s 00:29:31.023 sys 0m9.015s 00:29:31.024 10:10:44 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:31.024 10:10:44 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:29:31.024 ************************************ 00:29:31.024 END 
TEST nvmf_digest 00:29:31.024 ************************************ 00:29:31.024 10:10:44 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:29:31.024 10:10:44 nvmf_tcp -- nvmf/nvmf.sh@111 -- # [[ 1 -eq 1 ]] 00:29:31.024 10:10:44 nvmf_tcp -- nvmf/nvmf.sh@111 -- # [[ tcp == \t\c\p ]] 00:29:31.024 10:10:44 nvmf_tcp -- nvmf/nvmf.sh@113 -- # run_test nvmf_mdns_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/mdns_discovery.sh --transport=tcp 00:29:31.024 10:10:44 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:29:31.024 10:10:44 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:31.024 10:10:44 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:31.024 ************************************ 00:29:31.024 START TEST nvmf_mdns_discovery 00:29:31.024 ************************************ 00:29:31.024 10:10:44 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/mdns_discovery.sh --transport=tcp 00:29:31.024 * Looking for test storage... 00:29:31.024 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:29:31.024 10:10:44 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:29:31.024 10:10:44 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@7 -- # uname -s 00:29:31.024 10:10:44 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:31.024 10:10:44 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:31.024 10:10:44 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:31.024 10:10:44 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:31.024 10:10:44 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:31.024 10:10:44 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:31.024 10:10:44 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:31.024 10:10:44 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:31.024 10:10:44 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:31.024 10:10:44 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:31.024 10:10:44 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec 00:29:31.024 10:10:44 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=a2b6b25a-cc90-4aea-9f09-c06f8a634aec 00:29:31.024 10:10:44 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:31.024 10:10:44 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:31.024 10:10:44 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:29:31.024 10:10:44 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:31.024 10:10:44 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:29:31.024 10:10:44 nvmf_tcp.nvmf_mdns_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:31.024 10:10:44 nvmf_tcp.nvmf_mdns_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:31.024 10:10:44 nvmf_tcp.nvmf_mdns_discovery -- scripts/common.sh@517 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:29:31.024 10:10:44 nvmf_tcp.nvmf_mdns_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:31.024 10:10:44 nvmf_tcp.nvmf_mdns_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:31.024 10:10:44 nvmf_tcp.nvmf_mdns_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:31.024 10:10:44 nvmf_tcp.nvmf_mdns_discovery -- paths/export.sh@5 -- # export PATH 00:29:31.024 10:10:44 nvmf_tcp.nvmf_mdns_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:31.024 10:10:44 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@47 -- # : 0 00:29:31.024 10:10:44 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:31.024 10:10:44 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:31.024 10:10:44 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:31.024 10:10:44 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:31.024 10:10:44 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:31.024 10:10:44 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:31.024 10:10:44 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:31.024 10:10:44 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:31.024 10:10:44 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@13 -- # DISCOVERY_FILTER=address 00:29:31.024 10:10:44 
nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@14 -- # DISCOVERY_PORT=8009 00:29:31.024 10:10:44 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:29:31.024 10:10:44 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@18 -- # NQN=nqn.2016-06.io.spdk:cnode 00:29:31.024 10:10:44 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@19 -- # NQN2=nqn.2016-06.io.spdk:cnode2 00:29:31.024 10:10:44 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@21 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:29:31.024 10:10:44 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@22 -- # HOST_SOCK=/tmp/host.sock 00:29:31.024 10:10:44 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@24 -- # nvmftestinit 00:29:31.024 10:10:44 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:29:31.024 10:10:44 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:31.024 10:10:44 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:29:31.024 10:10:44 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:31.024 10:10:44 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:31.024 10:10:44 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:31.024 10:10:44 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:31.024 10:10:44 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:31.024 10:10:44 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:29:31.024 10:10:44 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:29:31.024 10:10:44 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:29:31.024 10:10:44 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:29:31.024 10:10:44 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:29:31.024 10:10:44 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@432 -- # nvmf_veth_init 00:29:31.024 10:10:44 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:31.024 10:10:44 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:31.024 10:10:44 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:29:31.024 10:10:44 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:29:31.024 10:10:44 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:29:31.024 10:10:44 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:29:31.024 10:10:44 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:29:31.024 10:10:44 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:31.024 10:10:44 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:29:31.024 10:10:44 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:29:31.024 10:10:44 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:29:31.024 10:10:44 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 
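For orientation, here is a condensed, hedged sketch (not part of the captured output) of the topology that nvmf_veth_init builds from the variables above; the verbatim commands appear in the log that follows. The host-side peers of both veth pairs hang off a single bridge, so the initiator at 10.0.0.1 can reach both target addresses (10.0.0.2 and 10.0.0.3) inside the nvmf_tgt_ns_spdk namespace.

    # Hedged recap of the veth/netns wiring; individual links are also brought up, as in the log below.
    ip netns add nvmf_tgt_ns_spdk                                   # target runs in its own namespace
    ip link add nvmf_init_if type veth peer name nvmf_init_br       # initiator veth pair (host side)
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br        # first target veth pair
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2       # second target veth pair
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk                 # move the target ends into the namespace
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if                        # NVMF_INITIATOR_IP
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if    # NVMF_FIRST_TARGET_IP
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2   # NVMF_SECOND_TARGET_IP
    ip link add nvmf_br type bridge                                 # NVMF_BRIDGE ties the host-side ends together
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP traffic from the initiator
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
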
00:29:31.024 10:10:44 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:29:31.284 10:10:44 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:29:31.284 Cannot find device "nvmf_tgt_br" 00:29:31.284 10:10:44 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@155 -- # true 00:29:31.284 10:10:44 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:29:31.284 Cannot find device "nvmf_tgt_br2" 00:29:31.284 10:10:44 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@156 -- # true 00:29:31.284 10:10:44 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:29:31.284 10:10:44 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:29:31.284 Cannot find device "nvmf_tgt_br" 00:29:31.284 10:10:44 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@158 -- # true 00:29:31.284 10:10:44 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:29:31.284 Cannot find device "nvmf_tgt_br2" 00:29:31.284 10:10:44 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@159 -- # true 00:29:31.284 10:10:44 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:29:31.284 10:10:44 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:29:31.284 10:10:44 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:29:31.284 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:29:31.284 10:10:44 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@162 -- # true 00:29:31.284 10:10:44 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:29:31.284 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:29:31.284 10:10:44 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@163 -- # true 00:29:31.284 10:10:44 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:29:31.284 10:10:44 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:29:31.284 10:10:44 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:29:31.284 10:10:44 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:29:31.284 10:10:44 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:29:31.284 10:10:44 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:29:31.284 10:10:44 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:29:31.284 10:10:44 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:29:31.284 10:10:44 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:29:31.284 10:10:44 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:29:31.284 10:10:44 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:29:31.284 10:10:44 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:29:31.284 10:10:44 
nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:29:31.284 10:10:44 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:29:31.284 10:10:44 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:29:31.284 10:10:44 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:29:31.284 10:10:44 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:29:31.284 10:10:44 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:29:31.284 10:10:44 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:29:31.284 10:10:44 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:29:31.284 10:10:44 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:29:31.544 10:10:44 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:29:31.544 10:10:44 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:29:31.544 10:10:44 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:29:31.544 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:31.544 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.083 ms 00:29:31.544 00:29:31.544 --- 10.0.0.2 ping statistics --- 00:29:31.544 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:31.544 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms 00:29:31.544 10:10:44 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:29:31.544 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:29:31.544 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.040 ms 00:29:31.544 00:29:31.544 --- 10.0.0.3 ping statistics --- 00:29:31.544 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:31.544 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:29:31.544 10:10:44 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:29:31.544 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:31.544 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.020 ms 00:29:31.544 00:29:31.544 --- 10.0.0.1 ping statistics --- 00:29:31.544 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:31.544 rtt min/avg/max/mdev = 0.020/0.020/0.020/0.000 ms 00:29:31.544 10:10:44 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:31.544 10:10:44 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@433 -- # return 0 00:29:31.544 10:10:44 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:29:31.544 10:10:44 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:31.544 10:10:44 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:29:31.544 10:10:44 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:29:31.544 10:10:44 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:31.544 10:10:44 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:29:31.544 10:10:44 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:29:31.544 10:10:44 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@29 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:29:31.544 10:10:44 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:29:31.544 10:10:44 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:31.544 10:10:44 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:31.544 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:31.544 10:10:44 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@481 -- # nvmfpid=94011 00:29:31.544 10:10:44 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@482 -- # waitforlisten 94011 00:29:31.544 10:10:44 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@829 -- # '[' -z 94011 ']' 00:29:31.544 10:10:44 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:31.544 10:10:44 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:31.544 10:10:44 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:31.544 10:10:44 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:31.544 10:10:44 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:31.544 10:10:44 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:29:31.544 [2024-07-15 10:10:44.981761] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:29:31.544 [2024-07-15 10:10:44.981831] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:31.544 [2024-07-15 10:10:45.118473] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:31.804 [2024-07-15 10:10:45.222827] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:29:31.804 [2024-07-15 10:10:45.222874] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:31.804 [2024-07-15 10:10:45.222880] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:31.804 [2024-07-15 10:10:45.222885] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:31.804 [2024-07-15 10:10:45.222889] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:31.804 [2024-07-15 10:10:45.222907] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:32.374 10:10:45 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:32.374 10:10:45 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@862 -- # return 0 00:29:32.374 10:10:45 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:29:32.374 10:10:45 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:32.374 10:10:45 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:32.374 10:10:45 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:32.374 10:10:45 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@31 -- # rpc_cmd nvmf_set_config --discovery-filter=address 00:29:32.374 10:10:45 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:32.374 10:10:45 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:32.374 10:10:45 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:32.374 10:10:45 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@32 -- # rpc_cmd framework_start_init 00:29:32.374 10:10:45 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:32.374 10:10:45 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:32.638 10:10:45 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:32.638 10:10:45 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@33 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:32.638 10:10:45 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:32.638 10:10:45 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:32.638 [2024-07-15 10:10:45.967511] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:32.638 10:10:45 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:32.638 10:10:45 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:29:32.638 10:10:45 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:32.638 10:10:45 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:32.638 [2024-07-15 10:10:45.979586] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:29:32.638 10:10:45 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:32.638 10:10:45 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@36 -- # rpc_cmd bdev_null_create null0 1000 512 00:29:32.638 10:10:45 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:29:32.638 10:10:45 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:32.638 null0 00:29:32.638 10:10:45 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:32.638 10:10:45 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@37 -- # rpc_cmd bdev_null_create null1 1000 512 00:29:32.638 10:10:45 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:32.638 10:10:45 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:32.638 null1 00:29:32.638 10:10:46 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:32.638 10:10:46 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@38 -- # rpc_cmd bdev_null_create null2 1000 512 00:29:32.638 10:10:46 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:32.638 10:10:46 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:32.638 null2 00:29:32.638 10:10:46 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:32.638 10:10:46 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@39 -- # rpc_cmd bdev_null_create null3 1000 512 00:29:32.638 10:10:46 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:32.638 10:10:46 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:32.638 null3 00:29:32.638 10:10:46 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:32.638 10:10:46 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@40 -- # rpc_cmd bdev_wait_for_examine 00:29:32.638 10:10:46 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:32.638 10:10:46 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:32.638 10:10:46 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:32.638 10:10:46 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@48 -- # hostpid=94068 00:29:32.638 10:10:46 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:29:32.638 10:10:46 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@49 -- # waitforlisten 94068 /tmp/host.sock 00:29:32.638 10:10:46 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@829 -- # '[' -z 94068 ']' 00:29:32.638 10:10:46 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:29:32.638 10:10:46 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:32.638 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:29:32.638 10:10:46 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:29:32.638 10:10:46 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:32.638 10:10:46 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:32.638 [2024-07-15 10:10:46.103867] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:29:32.638 [2024-07-15 10:10:46.103953] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94068 ] 00:29:32.898 [2024-07-15 10:10:46.240169] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:32.898 [2024-07-15 10:10:46.346158] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:33.467 10:10:46 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:33.467 10:10:46 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@862 -- # return 0 00:29:33.467 10:10:46 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@51 -- # trap 'process_shm --id $NVMF_APP_SHM_ID;exit 1' SIGINT SIGTERM 00:29:33.467 10:10:46 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@52 -- # trap 'process_shm --id $NVMF_APP_SHM_ID;nvmftestfini;kill $hostpid;kill $avahipid;' EXIT 00:29:33.467 10:10:46 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@56 -- # avahi-daemon --kill 00:29:33.726 10:10:47 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@58 -- # avahipid=94097 00:29:33.726 10:10:47 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@59 -- # sleep 1 00:29:33.726 10:10:47 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@57 -- # ip netns exec nvmf_tgt_ns_spdk avahi-daemon -f /dev/fd/63 00:29:33.726 10:10:47 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@57 -- # echo -e '[server]\nallow-interfaces=nvmf_tgt_if,nvmf_tgt_if2\nuse-ipv4=yes\nuse-ipv6=no' 00:29:33.726 Process 978 died: No such process; trying to remove PID file. (/run/avahi-daemon//pid) 00:29:33.726 Found user 'avahi' (UID 70) and group 'avahi' (GID 70). 00:29:33.726 Successfully dropped root privileges. 00:29:33.726 avahi-daemon 0.8 starting up. 00:29:33.726 WARNING: No NSS support for mDNS detected, consider installing nss-mdns! 00:29:33.726 Successfully called chroot(). 00:29:33.726 Successfully dropped remaining capabilities. 00:29:33.726 No service file found in /etc/avahi/services. 00:29:33.726 Joining mDNS multicast group on interface nvmf_tgt_if2.IPv4 with address 10.0.0.3. 00:29:33.726 New relevant interface nvmf_tgt_if2.IPv4 for mDNS. 00:29:33.726 Joining mDNS multicast group on interface nvmf_tgt_if.IPv4 with address 10.0.0.2. 00:29:33.726 New relevant interface nvmf_tgt_if.IPv4 for mDNS. 00:29:33.726 Network interface enumeration completed. 00:29:33.726 Registering new address record for fe80::d4b2:80ff:fe2b:84cf on nvmf_tgt_if2.*. 00:29:33.726 Registering new address record for 10.0.0.3 on nvmf_tgt_if2.IPv4. 00:29:33.726 Registering new address record for fe80::e073:5fff:fecc:6446 on nvmf_tgt_if.*. 00:29:33.726 Registering new address record for 10.0.0.2 on nvmf_tgt_if.IPv4. 00:29:34.664 Server startup complete. Host name is fedora38-cloud-1716830599-074-updated-1705279005.local. Local service cookie is 1029825241. 
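The avahi-daemon instance above is confined to the target's network namespace and fed its configuration through process substitution (the -f /dev/fd/63 argument visible in the xtrace), so it only ever serves nvmf_tgt_if and nvmf_tgt_if2. A minimal standalone sketch of the same setup, using a temporary file instead of /dev/fd/63 and assuming the namespace and interface names the harness uses:

    # throwaway avahi config restricted to the two test interfaces, IPv4 only
    avahi_conf=$(mktemp)
    printf '[server]\nallow-interfaces=nvmf_tgt_if,nvmf_tgt_if2\nuse-ipv4=yes\nuse-ipv6=no\n' > "$avahi_conf"
    # run the daemon inside the target namespace and keep its PID for the EXIT trap
    ip netns exec nvmf_tgt_ns_spdk avahi-daemon -f "$avahi_conf" &
    avahipid=$!
    sleep 1   # give avahi time to join the mDNS groups on 10.0.0.2 and 10.0.0.3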
00:29:34.664 10:10:48 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@61 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:29:34.664 10:10:48 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:34.664 10:10:48 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:34.664 10:10:48 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:34.664 10:10:48 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@62 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:29:34.664 10:10:48 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:34.664 10:10:48 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:34.664 10:10:48 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:34.664 10:10:48 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@86 -- # notify_id=0 00:29:34.664 10:10:48 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@92 -- # get_subsystem_names 00:29:34.664 10:10:48 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:29:34.664 10:10:48 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:34.664 10:10:48 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:34.664 10:10:48 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:29:34.664 10:10:48 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:29:34.664 10:10:48 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:29:34.664 10:10:48 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:34.664 10:10:48 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@92 -- # [[ '' == '' ]] 00:29:34.664 10:10:48 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@93 -- # get_bdev_list 00:29:34.664 10:10:48 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:34.664 10:10:48 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:34.664 10:10:48 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:34.664 10:10:48 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:29:34.664 10:10:48 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:29:34.664 10:10:48 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:29:34.664 10:10:48 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:34.664 10:10:48 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@93 -- # [[ '' == '' ]] 00:29:34.664 10:10:48 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:29:34.664 10:10:48 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:34.664 10:10:48 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:34.664 10:10:48 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:34.664 10:10:48 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # get_subsystem_names 00:29:34.664 10:10:48 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 
00:29:34.664 10:10:48 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:34.664 10:10:48 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:34.664 10:10:48 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:29:34.664 10:10:48 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:29:34.664 10:10:48 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:29:34.664 10:10:48 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:34.924 10:10:48 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ '' == '' ]] 00:29:34.924 10:10:48 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@97 -- # get_bdev_list 00:29:34.924 10:10:48 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:29:34.924 10:10:48 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:34.924 10:10:48 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:29:34.924 10:10:48 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:29:34.924 10:10:48 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:34.924 10:10:48 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:34.924 10:10:48 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:34.924 10:10:48 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@97 -- # [[ '' == '' ]] 00:29:34.924 10:10:48 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@99 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:29:34.924 10:10:48 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:34.924 10:10:48 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:34.924 10:10:48 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:34.924 10:10:48 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@100 -- # get_subsystem_names 00:29:34.924 10:10:48 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:29:34.924 10:10:48 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:29:34.924 10:10:48 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:34.924 10:10:48 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:29:34.924 10:10:48 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:34.924 10:10:48 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:29:34.924 10:10:48 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:34.924 10:10:48 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@100 -- # [[ '' == '' ]] 00:29:34.924 10:10:48 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@101 -- # get_bdev_list 00:29:34.924 10:10:48 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:29:34.924 10:10:48 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:34.924 10:10:48 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:29:34.924 10:10:48 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:34.924 10:10:48 nvmf_tcp.nvmf_mdns_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:29:34.924 10:10:48 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:29:34.924 10:10:48 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:34.924 [2024-07-15 10:10:48.412476] bdev_mdns_client.c: 395:mdns_browse_handler: *INFO*: (Browser) CACHE_EXHAUSTED 00:29:34.924 10:10:48 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@101 -- # [[ '' == '' ]] 00:29:34.924 10:10:48 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@105 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:34.925 10:10:48 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:34.925 10:10:48 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:34.925 [2024-07-15 10:10:48.447342] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:34.925 10:10:48 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:34.925 10:10:48 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@109 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:29:34.925 10:10:48 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:34.925 10:10:48 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:34.925 10:10:48 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:34.925 10:10:48 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@112 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode20 00:29:34.925 10:10:48 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:34.925 10:10:48 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:34.925 10:10:48 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:34.925 10:10:48 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@113 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode20 null2 00:29:34.925 10:10:48 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:34.925 10:10:48 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:34.925 10:10:48 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:34.925 10:10:48 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode20 nqn.2021-12.io.spdk:test 00:29:34.925 10:10:48 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:34.925 10:10:48 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:34.925 10:10:48 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:34.925 10:10:48 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@119 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.3 -s 8009 00:29:34.925 10:10:48 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:34.925 10:10:48 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:34.925 [2024-07-15 10:10:48.507238] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 *** 00:29:35.184 10:10:48 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:35.184 10:10:48 
nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@121 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.3 -s 4420 00:29:35.184 10:10:48 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:35.184 10:10:48 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:35.184 [2024-07-15 10:10:48.519207] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:29:35.184 10:10:48 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:35.184 10:10:48 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@124 -- # rpc_cmd nvmf_publish_mdns_prr 00:29:35.184 10:10:48 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:35.184 10:10:48 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:35.184 10:10:48 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:35.184 10:10:48 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@125 -- # sleep 5 00:29:35.751 [2024-07-15 10:10:49.310740] bdev_mdns_client.c: 395:mdns_browse_handler: *INFO*: (Browser) ALL_FOR_NOW 00:29:36.687 [2024-07-15 10:10:49.909617] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'spdk0' of type '_nvme-disc._tcp' in domain 'local' 00:29:36.687 [2024-07-15 10:10:49.909666] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1716830599-074-updated-1705279005.local:8009 (10.0.0.3) 00:29:36.687 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:29:36.687 cookie is 0 00:29:36.687 is_local: 1 00:29:36.687 our_own: 0 00:29:36.687 wide_area: 0 00:29:36.687 multicast: 1 00:29:36.687 cached: 1 00:29:36.687 [2024-07-15 10:10:50.009420] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'spdk1' of type '_nvme-disc._tcp' in domain 'local' 00:29:36.687 [2024-07-15 10:10:50.009455] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1716830599-074-updated-1705279005.local:8009 (10.0.0.3) 00:29:36.687 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:29:36.687 cookie is 0 00:29:36.687 is_local: 1 00:29:36.687 our_own: 0 00:29:36.687 wide_area: 0 00:29:36.687 multicast: 1 00:29:36.687 cached: 1 00:29:36.687 [2024-07-15 10:10:50.009465] bdev_mdns_client.c: 322:mdns_resolve_handler: *ERROR*: mDNS discovery entry exists already. 
trid->traddr: 10.0.0.3 trid->trsvcid: 8009 00:29:36.687 [2024-07-15 10:10:50.109223] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'spdk0' of type '_nvme-disc._tcp' in domain 'local' 00:29:36.687 [2024-07-15 10:10:50.109256] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1716830599-074-updated-1705279005.local:8009 (10.0.0.2) 00:29:36.687 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:29:36.687 cookie is 0 00:29:36.687 is_local: 1 00:29:36.687 our_own: 0 00:29:36.688 wide_area: 0 00:29:36.688 multicast: 1 00:29:36.688 cached: 1 00:29:36.688 [2024-07-15 10:10:50.209030] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'spdk1' of type '_nvme-disc._tcp' in domain 'local' 00:29:36.688 [2024-07-15 10:10:50.209060] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1716830599-074-updated-1705279005.local:8009 (10.0.0.2) 00:29:36.688 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:29:36.688 cookie is 0 00:29:36.688 is_local: 1 00:29:36.688 our_own: 0 00:29:36.688 wide_area: 0 00:29:36.688 multicast: 1 00:29:36.688 cached: 1 00:29:36.688 [2024-07-15 10:10:50.209068] bdev_mdns_client.c: 322:mdns_resolve_handler: *ERROR*: mDNS discovery entry exists already. trid->traddr: 10.0.0.2 trid->trsvcid: 8009 00:29:37.626 [2024-07-15 10:10:50.912147] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:29:37.626 [2024-07-15 10:10:50.912184] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:29:37.626 [2024-07-15 10:10:50.912196] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:29:37.626 [2024-07-15 10:10:50.998084] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 new subsystem mdns0_nvme0 00:29:37.626 [2024-07-15 10:10:51.054453] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach mdns0_nvme0 done 00:29:37.626 [2024-07-15 10:10:51.054497] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 found again 00:29:37.626 [2024-07-15 10:10:51.111755] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:29:37.626 [2024-07-15 10:10:51.111792] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:29:37.626 [2024-07-15 10:10:51.111806] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:29:37.626 [2024-07-15 10:10:51.199720] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem mdns1_nvme0 00:29:37.884 [2024-07-15 10:10:51.262491] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach mdns1_nvme0 done 00:29:37.884 [2024-07-15 10:10:51.262533] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:29:40.426 10:10:53 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@127 -- # get_mdns_discovery_svcs 00:29:40.426 10:10:53 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # sort 00:29:40.426 10:10:53 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info 00:29:40.426 10:10:53 
nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # jq -r '.[].name' 00:29:40.426 10:10:53 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:40.426 10:10:53 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:40.426 10:10:53 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # xargs 00:29:40.426 10:10:53 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:40.426 10:10:53 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@127 -- # [[ mdns == \m\d\n\s ]] 00:29:40.426 10:10:53 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@128 -- # get_discovery_ctrlrs 00:29:40.426 10:10:53 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:29:40.426 10:10:53 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # jq -r '.[].name' 00:29:40.426 10:10:53 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # sort 00:29:40.426 10:10:53 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # xargs 00:29:40.426 10:10:53 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:40.426 10:10:53 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:40.426 10:10:53 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:40.426 10:10:53 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@128 -- # [[ mdns0_nvme mdns1_nvme == \m\d\n\s\0\_\n\v\m\e\ \m\d\n\s\1\_\n\v\m\e ]] 00:29:40.426 10:10:53 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@129 -- # get_subsystem_names 00:29:40.426 10:10:53 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:29:40.426 10:10:53 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:29:40.426 10:10:53 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:29:40.426 10:10:53 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:40.426 10:10:53 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:29:40.426 10:10:53 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:40.426 10:10:53 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:40.426 10:10:53 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@129 -- # [[ mdns0_nvme0 mdns1_nvme0 == \m\d\n\s\0\_\n\v\m\e\0\ \m\d\n\s\1\_\n\v\m\e\0 ]] 00:29:40.426 10:10:53 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@130 -- # get_bdev_list 00:29:40.426 10:10:53 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:40.426 10:10:53 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:40.426 10:10:53 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:40.426 10:10:53 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:29:40.426 10:10:53 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:29:40.426 10:10:53 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:29:40.426 10:10:53 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:40.426 10:10:53 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@130 -- # [[ mdns0_nvme0n1 mdns1_nvme0n1 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\1 ]] 
00:29:40.426 10:10:53 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@131 -- # get_subsystem_paths mdns0_nvme0 00:29:40.426 10:10:53 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 00:29:40.426 10:10:53 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:40.426 10:10:53 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:40.426 10:10:53 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:29:40.426 10:10:53 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n 00:29:40.426 10:10:53 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs 00:29:40.426 10:10:53 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:40.426 10:10:53 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@131 -- # [[ 4420 == \4\4\2\0 ]] 00:29:40.426 10:10:53 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@132 -- # get_subsystem_paths mdns1_nvme0 00:29:40.426 10:10:53 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0 00:29:40.426 10:10:53 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:29:40.426 10:10:53 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:40.426 10:10:53 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:40.426 10:10:53 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n 00:29:40.426 10:10:53 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs 00:29:40.426 10:10:53 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:40.426 10:10:53 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@132 -- # [[ 4420 == \4\4\2\0 ]] 00:29:40.426 10:10:53 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@133 -- # get_notification_count 00:29:40.426 10:10:53 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:29:40.426 10:10:53 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # jq '. 
| length' 00:29:40.426 10:10:53 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:40.426 10:10:53 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:40.426 10:10:53 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:40.426 10:10:53 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # notification_count=2 00:29:40.426 10:10:53 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@89 -- # notify_id=2 00:29:40.426 10:10:53 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@134 -- # [[ 2 == 2 ]] 00:29:40.426 10:10:53 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@137 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:29:40.426 10:10:53 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:40.426 10:10:53 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:40.426 10:10:53 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:40.426 10:10:53 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@138 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode20 null3 00:29:40.426 10:10:53 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:40.426 10:10:53 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:40.426 10:10:53 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:40.426 10:10:53 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@139 -- # sleep 1 00:29:41.363 10:10:54 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@141 -- # get_bdev_list 00:29:41.363 10:10:54 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:41.363 10:10:54 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:41.363 10:10:54 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:29:41.363 10:10:54 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:41.363 10:10:54 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:29:41.363 10:10:54 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:29:41.621 10:10:54 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:41.621 10:10:54 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@141 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:29:41.621 10:10:54 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@142 -- # get_notification_count 00:29:41.621 10:10:54 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:29:41.621 10:10:54 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # jq '. 
| length' 00:29:41.621 10:10:54 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:41.621 10:10:54 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:41.621 10:10:54 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:41.621 10:10:55 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # notification_count=2 00:29:41.621 10:10:55 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@89 -- # notify_id=4 00:29:41.621 10:10:55 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@143 -- # [[ 2 == 2 ]] 00:29:41.621 10:10:55 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@147 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:29:41.621 10:10:55 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:41.621 10:10:55 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:41.621 [2024-07-15 10:10:55.036757] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:29:41.621 [2024-07-15 10:10:55.037595] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:29:41.621 [2024-07-15 10:10:55.037632] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:29:41.621 [2024-07-15 10:10:55.037667] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:29:41.621 [2024-07-15 10:10:55.037678] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:29:41.621 10:10:55 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:41.621 10:10:55 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@148 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.3 -s 4421 00:29:41.621 10:10:55 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:41.621 10:10:55 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:41.621 [2024-07-15 10:10:55.048676] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:29:41.621 [2024-07-15 10:10:55.049556] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:29:41.621 [2024-07-15 10:10:55.049595] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:29:41.621 10:10:55 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:41.621 10:10:55 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@149 -- # sleep 1 00:29:41.621 [2024-07-15 10:10:55.180409] bdev_nvme.c:6907:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 new path for mdns0_nvme0 00:29:41.621 [2024-07-15 10:10:55.181402] bdev_nvme.c:6907:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for mdns1_nvme0 00:29:41.881 [2024-07-15 10:10:55.238587] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach mdns0_nvme0 done 00:29:41.881 [2024-07-15 10:10:55.238624] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 found again 00:29:41.881 [2024-07-15 10:10:55.238629] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM 
nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:29:41.881 [2024-07-15 10:10:55.238644] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:29:41.881 [2024-07-15 10:10:55.239469] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach mdns1_nvme0 done 00:29:41.881 [2024-07-15 10:10:55.239482] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:29:41.881 [2024-07-15 10:10:55.239486] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:29:41.881 [2024-07-15 10:10:55.239496] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:29:41.881 [2024-07-15 10:10:55.284413] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 found again 00:29:41.881 [2024-07-15 10:10:55.284439] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:29:41.881 [2024-07-15 10:10:55.285400] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:29:41.881 [2024-07-15 10:10:55.285412] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:29:42.820 10:10:56 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@151 -- # get_subsystem_names 00:29:42.820 10:10:56 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:29:42.820 10:10:56 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:29:42.820 10:10:56 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:29:42.820 10:10:56 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:42.820 10:10:56 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:42.820 10:10:56 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:29:42.820 10:10:56 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:42.820 10:10:56 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@151 -- # [[ mdns0_nvme0 mdns1_nvme0 == \m\d\n\s\0\_\n\v\m\e\0\ \m\d\n\s\1\_\n\v\m\e\0 ]] 00:29:42.820 10:10:56 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@152 -- # get_bdev_list 00:29:42.820 10:10:56 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:29:42.820 10:10:56 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:42.820 10:10:56 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:42.820 10:10:56 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:42.820 10:10:56 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:29:42.820 10:10:56 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:29:42.820 10:10:56 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:42.820 10:10:56 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@152 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ 
\m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:29:42.820 10:10:56 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@153 -- # get_subsystem_paths mdns0_nvme0 00:29:42.820 10:10:56 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:29:42.820 10:10:56 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 00:29:42.820 10:10:56 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n 00:29:42.820 10:10:56 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs 00:29:42.820 10:10:56 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:42.820 10:10:56 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:42.820 10:10:56 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:42.820 10:10:56 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@153 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:29:42.820 10:10:56 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@154 -- # get_subsystem_paths mdns1_nvme0 00:29:42.820 10:10:56 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0 00:29:42.820 10:10:56 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:42.820 10:10:56 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:29:42.820 10:10:56 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:42.820 10:10:56 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n 00:29:42.820 10:10:56 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs 00:29:42.820 10:10:56 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:42.820 10:10:56 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@154 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:29:42.820 10:10:56 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@155 -- # get_notification_count 00:29:42.820 10:10:56 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # jq '. 
| length' 00:29:42.820 10:10:56 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 4 00:29:42.820 10:10:56 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:42.820 10:10:56 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:42.820 10:10:56 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:42.820 10:10:56 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # notification_count=0 00:29:42.820 10:10:56 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@89 -- # notify_id=4 00:29:42.820 10:10:56 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@156 -- # [[ 0 == 0 ]] 00:29:42.820 10:10:56 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@160 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:42.820 10:10:56 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:42.820 10:10:56 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:42.820 [2024-07-15 10:10:56.315138] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:29:42.820 [2024-07-15 10:10:56.315177] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:29:42.820 [2024-07-15 10:10:56.315203] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:29:42.820 [2024-07-15 10:10:56.315213] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:29:42.820 10:10:56 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:42.820 10:10:56 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@161 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.3 -s 4420 00:29:42.820 10:10:56 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:42.820 10:10:56 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:42.820 [2024-07-15 10:10:56.321876] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:42.820 [2024-07-15 10:10:56.321911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.820 [2024-07-15 10:10:56.321920] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:42.820 [2024-07-15 10:10:56.321926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.820 [2024-07-15 10:10:56.321932] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:42.820 [2024-07-15 10:10:56.321937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.820 [2024-07-15 10:10:56.321944] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:42.820 [2024-07-15 10:10:56.321949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.820 
[2024-07-15 10:10:56.321955] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2540350 is same with the state(5) to be set 00:29:42.820 [2024-07-15 10:10:56.327128] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:29:42.820 [2024-07-15 10:10:56.327171] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:29:42.820 10:10:56 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:42.820 [2024-07-15 10:10:56.331819] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2540350 (9): Bad file descriptor 00:29:42.820 10:10:56 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@162 -- # sleep 1 00:29:42.821 [2024-07-15 10:10:56.334136] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:42.821 [2024-07-15 10:10:56.334162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.821 [2024-07-15 10:10:56.334170] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:42.821 [2024-07-15 10:10:56.334176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.821 [2024-07-15 10:10:56.334182] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:42.821 [2024-07-15 10:10:56.334187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.821 [2024-07-15 10:10:56.334193] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:42.821 [2024-07-15 10:10:56.334199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.821 [2024-07-15 10:10:56.334204] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24f9230 is same with the state(5) to be set 00:29:42.821 [2024-07-15 10:10:56.341817] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:29:42.821 [2024-07-15 10:10:56.341917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.821 [2024-07-15 10:10:56.341935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2540350 with addr=10.0.0.2, port=4420 00:29:42.821 [2024-07-15 10:10:56.341942] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2540350 is same with the state(5) to be set 00:29:42.821 [2024-07-15 10:10:56.341955] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2540350 (9): Bad file descriptor 00:29:42.821 [2024-07-15 10:10:56.341966] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:29:42.821 [2024-07-15 10:10:56.341972] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:29:42.821 [2024-07-15 10:10:56.341978] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
00:29:42.821 [2024-07-15 10:10:56.341990] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:42.821 [2024-07-15 10:10:56.344082] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24f9230 (9): Bad file descriptor 00:29:42.821 [2024-07-15 10:10:56.351848] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:29:42.821 [2024-07-15 10:10:56.351939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.821 [2024-07-15 10:10:56.351952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2540350 with addr=10.0.0.2, port=4420 00:29:42.821 [2024-07-15 10:10:56.351958] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2540350 is same with the state(5) to be set 00:29:42.821 [2024-07-15 10:10:56.351968] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2540350 (9): Bad file descriptor 00:29:42.821 [2024-07-15 10:10:56.351976] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:29:42.821 [2024-07-15 10:10:56.351981] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:29:42.821 [2024-07-15 10:10:56.351988] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:29:42.821 [2024-07-15 10:10:56.351996] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:42.821 [2024-07-15 10:10:56.354070] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:29:42.821 [2024-07-15 10:10:56.354123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.821 [2024-07-15 10:10:56.354133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24f9230 with addr=10.0.0.3, port=4420 00:29:42.821 [2024-07-15 10:10:56.354139] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24f9230 is same with the state(5) to be set 00:29:42.821 [2024-07-15 10:10:56.354147] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24f9230 (9): Bad file descriptor 00:29:42.821 [2024-07-15 10:10:56.354155] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:29:42.821 [2024-07-15 10:10:56.354160] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:29:42.821 [2024-07-15 10:10:56.354165] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:29:42.821 [2024-07-15 10:10:56.354173] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
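The connect() failures with errno 111 (ECONNREFUSED on Linux) that repeat below line up with the nvmf_subsystem_remove_listener calls on port 4420 above: bdev_nvme keeps trying to reconnect each controller to the removed 4420 listener until the next discovery log page drops that path in favour of the 4421 listeners added earlier. A rough way to check the surviving paths from outside the harness, mirroring the rpc_cmd | jq pipeline the test itself runs (assuming the standalone scripts/rpc.py client behind that wrapper):

    # expected to report only 4421 per controller once the discovery poller has
    # processed the updated log page (4420 removed above, 4421 added earlier)
    for ctrlr in mdns0_nvme0 mdns1_nvme0; do
        ./scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers -n "$ctrlr" \
            | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
    done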
00:29:42.821 [2024-07-15 10:10:56.361869] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:29:42.821 [2024-07-15 10:10:56.361922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.821 [2024-07-15 10:10:56.361932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2540350 with addr=10.0.0.2, port=4420 00:29:42.821 [2024-07-15 10:10:56.361938] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2540350 is same with the state(5) to be set 00:29:42.821 [2024-07-15 10:10:56.361946] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2540350 (9): Bad file descriptor 00:29:42.821 [2024-07-15 10:10:56.361954] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:29:42.821 [2024-07-15 10:10:56.361959] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:29:42.821 [2024-07-15 10:10:56.361964] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:29:42.821 [2024-07-15 10:10:56.361972] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:42.821 [2024-07-15 10:10:56.364084] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:29:42.821 [2024-07-15 10:10:56.364131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.821 [2024-07-15 10:10:56.364141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24f9230 with addr=10.0.0.3, port=4420 00:29:42.821 [2024-07-15 10:10:56.364147] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24f9230 is same with the state(5) to be set 00:29:42.821 [2024-07-15 10:10:56.364156] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24f9230 (9): Bad file descriptor 00:29:42.821 [2024-07-15 10:10:56.364164] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:29:42.821 [2024-07-15 10:10:56.364169] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:29:42.821 [2024-07-15 10:10:56.364174] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:29:42.821 [2024-07-15 10:10:56.364182] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:42.821 [2024-07-15 10:10:56.371887] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:29:42.821 [2024-07-15 10:10:56.371956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.821 [2024-07-15 10:10:56.371968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2540350 with addr=10.0.0.2, port=4420 00:29:42.821 [2024-07-15 10:10:56.371974] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2540350 is same with the state(5) to be set 00:29:42.821 [2024-07-15 10:10:56.371984] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2540350 (9): Bad file descriptor 00:29:42.821 [2024-07-15 10:10:56.371993] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:29:42.821 [2024-07-15 10:10:56.371998] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:29:42.821 [2024-07-15 10:10:56.372004] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:29:42.821 [2024-07-15 10:10:56.372013] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:42.821 [2024-07-15 10:10:56.374098] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:29:42.821 [2024-07-15 10:10:56.374159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.821 [2024-07-15 10:10:56.374171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24f9230 with addr=10.0.0.3, port=4420 00:29:42.821 [2024-07-15 10:10:56.374177] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24f9230 is same with the state(5) to be set 00:29:42.821 [2024-07-15 10:10:56.374187] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24f9230 (9): Bad file descriptor 00:29:42.821 [2024-07-15 10:10:56.374195] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:29:42.821 [2024-07-15 10:10:56.374200] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:29:42.821 [2024-07-15 10:10:56.374205] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:29:42.821 [2024-07-15 10:10:56.374214] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:42.821 [2024-07-15 10:10:56.381912] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:29:42.821 [2024-07-15 10:10:56.381981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.821 [2024-07-15 10:10:56.381992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2540350 with addr=10.0.0.2, port=4420 00:29:42.821 [2024-07-15 10:10:56.381998] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2540350 is same with the state(5) to be set 00:29:42.821 [2024-07-15 10:10:56.382008] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2540350 (9): Bad file descriptor 00:29:42.821 [2024-07-15 10:10:56.382024] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:29:42.821 [2024-07-15 10:10:56.382030] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:29:42.821 [2024-07-15 10:10:56.382035] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:29:42.821 [2024-07-15 10:10:56.382044] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:42.821 [2024-07-15 10:10:56.384119] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:29:42.821 [2024-07-15 10:10:56.384167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.821 [2024-07-15 10:10:56.384177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24f9230 with addr=10.0.0.3, port=4420 00:29:42.821 [2024-07-15 10:10:56.384183] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24f9230 is same with the state(5) to be set 00:29:42.821 [2024-07-15 10:10:56.384191] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24f9230 (9): Bad file descriptor 00:29:42.821 [2024-07-15 10:10:56.384200] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:29:42.822 [2024-07-15 10:10:56.384205] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:29:42.822 [2024-07-15 10:10:56.384210] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:29:42.822 [2024-07-15 10:10:56.384218] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:42.822 [2024-07-15 10:10:56.391927] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:29:42.822 [2024-07-15 10:10:56.392004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.822 [2024-07-15 10:10:56.392016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2540350 with addr=10.0.0.2, port=4420 00:29:42.822 [2024-07-15 10:10:56.392022] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2540350 is same with the state(5) to be set 00:29:42.822 [2024-07-15 10:10:56.392030] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2540350 (9): Bad file descriptor 00:29:42.822 [2024-07-15 10:10:56.392048] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:29:42.822 [2024-07-15 10:10:56.392053] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:29:42.822 [2024-07-15 10:10:56.392059] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:29:42.822 [2024-07-15 10:10:56.392067] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:42.822 [2024-07-15 10:10:56.394131] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:29:42.822 [2024-07-15 10:10:56.394197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.822 [2024-07-15 10:10:56.394207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24f9230 with addr=10.0.0.3, port=4420 00:29:42.822 [2024-07-15 10:10:56.394213] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24f9230 is same with the state(5) to be set 00:29:42.822 [2024-07-15 10:10:56.394222] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24f9230 (9): Bad file descriptor 00:29:42.822 [2024-07-15 10:10:56.394230] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:29:42.822 [2024-07-15 10:10:56.394235] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:29:42.822 [2024-07-15 10:10:56.394240] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:29:42.822 [2024-07-15 10:10:56.394248] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:42.822 [2024-07-15 10:10:56.401948] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:29:42.822 [2024-07-15 10:10:56.402003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.822 [2024-07-15 10:10:56.402014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2540350 with addr=10.0.0.2, port=4420 00:29:42.822 [2024-07-15 10:10:56.402020] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2540350 is same with the state(5) to be set 00:29:42.822 [2024-07-15 10:10:56.402029] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2540350 (9): Bad file descriptor 00:29:42.822 [2024-07-15 10:10:56.402047] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:29:42.822 [2024-07-15 10:10:56.402053] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:29:42.822 [2024-07-15 10:10:56.402059] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:29:42.822 [2024-07-15 10:10:56.402068] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:43.083 [2024-07-15 10:10:56.404144] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:29:43.083 [2024-07-15 10:10:56.404193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.083 [2024-07-15 10:10:56.404203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24f9230 with addr=10.0.0.3, port=4420 00:29:43.083 [2024-07-15 10:10:56.404209] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24f9230 is same with the state(5) to be set 00:29:43.083 [2024-07-15 10:10:56.404218] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24f9230 (9): Bad file descriptor 00:29:43.083 [2024-07-15 10:10:56.404226] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:29:43.083 [2024-07-15 10:10:56.404231] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:29:43.083 [2024-07-15 10:10:56.404236] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:29:43.083 [2024-07-15 10:10:56.404244] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:43.083 [2024-07-15 10:10:56.411965] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:29:43.083 [2024-07-15 10:10:56.412023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.083 [2024-07-15 10:10:56.412033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2540350 with addr=10.0.0.2, port=4420 00:29:43.083 [2024-07-15 10:10:56.412039] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2540350 is same with the state(5) to be set 00:29:43.083 [2024-07-15 10:10:56.412048] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2540350 (9): Bad file descriptor 00:29:43.083 [2024-07-15 10:10:56.412065] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:29:43.083 [2024-07-15 10:10:56.412070] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:29:43.083 [2024-07-15 10:10:56.412076] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:29:43.083 [2024-07-15 10:10:56.412084] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:43.083 [2024-07-15 10:10:56.414159] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:29:43.083 [2024-07-15 10:10:56.414219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.083 [2024-07-15 10:10:56.414231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24f9230 with addr=10.0.0.3, port=4420 00:29:43.083 [2024-07-15 10:10:56.414237] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24f9230 is same with the state(5) to be set 00:29:43.083 [2024-07-15 10:10:56.414247] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24f9230 (9): Bad file descriptor 00:29:43.083 [2024-07-15 10:10:56.414255] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:29:43.083 [2024-07-15 10:10:56.414260] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:29:43.083 [2024-07-15 10:10:56.414265] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:29:43.083 [2024-07-15 10:10:56.414274] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:43.083 [2024-07-15 10:10:56.421988] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:29:43.083 [2024-07-15 10:10:56.422063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.083 [2024-07-15 10:10:56.422075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2540350 with addr=10.0.0.2, port=4420 00:29:43.083 [2024-07-15 10:10:56.422081] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2540350 is same with the state(5) to be set 00:29:43.083 [2024-07-15 10:10:56.422091] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2540350 (9): Bad file descriptor 00:29:43.083 [2024-07-15 10:10:56.422108] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:29:43.083 [2024-07-15 10:10:56.422114] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:29:43.083 [2024-07-15 10:10:56.422119] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:29:43.083 [2024-07-15 10:10:56.422128] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:43.083 [2024-07-15 10:10:56.424178] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:29:43.083 [2024-07-15 10:10:56.424225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.083 [2024-07-15 10:10:56.424234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24f9230 with addr=10.0.0.3, port=4420 00:29:43.083 [2024-07-15 10:10:56.424240] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24f9230 is same with the state(5) to be set 00:29:43.083 [2024-07-15 10:10:56.424250] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24f9230 (9): Bad file descriptor 00:29:43.083 [2024-07-15 10:10:56.424257] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:29:43.083 [2024-07-15 10:10:56.424262] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:29:43.083 [2024-07-15 10:10:56.424267] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:29:43.083 [2024-07-15 10:10:56.424275] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:43.083 [2024-07-15 10:10:56.432014] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:29:43.083 [2024-07-15 10:10:56.432072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.083 [2024-07-15 10:10:56.432083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2540350 with addr=10.0.0.2, port=4420 00:29:43.083 [2024-07-15 10:10:56.432089] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2540350 is same with the state(5) to be set 00:29:43.083 [2024-07-15 10:10:56.432098] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2540350 (9): Bad file descriptor 00:29:43.083 [2024-07-15 10:10:56.432115] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:29:43.083 [2024-07-15 10:10:56.432120] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:29:43.083 [2024-07-15 10:10:56.432126] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:29:43.083 [2024-07-15 10:10:56.432134] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:43.083 [2024-07-15 10:10:56.434191] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:29:43.083 [2024-07-15 10:10:56.434298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.083 [2024-07-15 10:10:56.434312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24f9230 with addr=10.0.0.3, port=4420 00:29:43.083 [2024-07-15 10:10:56.434319] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24f9230 is same with the state(5) to be set 00:29:43.083 [2024-07-15 10:10:56.434329] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24f9230 (9): Bad file descriptor 00:29:43.083 [2024-07-15 10:10:56.434338] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:29:43.083 [2024-07-15 10:10:56.434342] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:29:43.083 [2024-07-15 10:10:56.434348] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:29:43.083 [2024-07-15 10:10:56.434357] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:43.083 [2024-07-15 10:10:56.442035] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:29:43.083 [2024-07-15 10:10:56.442106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.083 [2024-07-15 10:10:56.442117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2540350 with addr=10.0.0.2, port=4420 00:29:43.083 [2024-07-15 10:10:56.442139] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2540350 is same with the state(5) to be set 00:29:43.083 [2024-07-15 10:10:56.442149] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2540350 (9): Bad file descriptor 00:29:43.083 [2024-07-15 10:10:56.442166] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:29:43.083 [2024-07-15 10:10:56.442171] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:29:43.083 [2024-07-15 10:10:56.442177] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:29:43.083 [2024-07-15 10:10:56.442186] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:43.083 [2024-07-15 10:10:56.444241] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:29:43.083 [2024-07-15 10:10:56.444288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.083 [2024-07-15 10:10:56.444298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24f9230 with addr=10.0.0.3, port=4420 00:29:43.083 [2024-07-15 10:10:56.444304] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24f9230 is same with the state(5) to be set 00:29:43.083 [2024-07-15 10:10:56.444312] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24f9230 (9): Bad file descriptor 00:29:43.083 [2024-07-15 10:10:56.444319] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:29:43.083 [2024-07-15 10:10:56.444324] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:29:43.083 [2024-07-15 10:10:56.444329] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:29:43.083 [2024-07-15 10:10:56.444336] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:43.083 [2024-07-15 10:10:56.452059] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:29:43.083 [2024-07-15 10:10:56.452113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.083 [2024-07-15 10:10:56.452124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2540350 with addr=10.0.0.2, port=4420 00:29:43.083 [2024-07-15 10:10:56.452130] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2540350 is same with the state(5) to be set 00:29:43.083 [2024-07-15 10:10:56.452138] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2540350 (9): Bad file descriptor 00:29:43.083 [2024-07-15 10:10:56.452154] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:29:43.083 [2024-07-15 10:10:56.452159] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:29:43.083 [2024-07-15 10:10:56.452164] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:29:43.083 [2024-07-15 10:10:56.452172] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:43.083 [2024-07-15 10:10:56.454253] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:29:43.083 [2024-07-15 10:10:56.454302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.083 [2024-07-15 10:10:56.454312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24f9230 with addr=10.0.0.3, port=4420 00:29:43.083 [2024-07-15 10:10:56.454318] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24f9230 is same with the state(5) to be set 00:29:43.083 [2024-07-15 10:10:56.454326] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24f9230 (9): Bad file descriptor 00:29:43.083 [2024-07-15 10:10:56.454334] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:29:43.084 [2024-07-15 10:10:56.454338] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:29:43.084 [2024-07-15 10:10:56.454343] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:29:43.084 [2024-07-15 10:10:56.454351] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
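The records above repeat one reconnect cycle roughly every 10 ms: the test has withdrawn the 4420 listeners, so each connect() toward 10.0.0.2:4420 (cnode0) and 10.0.0.3:4420 (cnode20) is refused (errno 111 is ECONNREFUSED on Linux), controller re-initialization fails, and bdev_nvme keeps logging "Resetting controller failed." until the discovery poller drops the stale path in favor of 4421. A minimal sketch of checking that by hand over the same host RPC socket, assuming scripts/rpc.py run from the SPDK repo root as a stand-in for the test's rpc_cmd helper:
  # service ids of the paths still attached to one controller (expected: 4421 only)
  scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs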
00:29:43.084 [2024-07-15 10:10:56.457166] bdev_nvme.c:6770:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 not found 00:29:43.084 [2024-07-15 10:10:56.457190] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:29:43.084 [2024-07-15 10:10:56.457241] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:29:43.084 [2024-07-15 10:10:56.458174] bdev_nvme.c:6770:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:29:43.084 [2024-07-15 10:10:56.458195] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:29:43.084 [2024-07-15 10:10:56.458207] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:29:43.084 [2024-07-15 10:10:56.543070] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:29:43.084 [2024-07-15 10:10:56.544048] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:29:44.021 10:10:57 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@164 -- # get_subsystem_names 00:29:44.021 10:10:57 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:29:44.021 10:10:57 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:29:44.021 10:10:57 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:44.021 10:10:57 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:29:44.021 10:10:57 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:44.021 10:10:57 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:29:44.021 10:10:57 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:44.021 10:10:57 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@164 -- # [[ mdns0_nvme0 mdns1_nvme0 == \m\d\n\s\0\_\n\v\m\e\0\ \m\d\n\s\1\_\n\v\m\e\0 ]] 00:29:44.021 10:10:57 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@165 -- # get_bdev_list 00:29:44.021 10:10:57 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:44.021 10:10:57 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:44.022 10:10:57 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:44.022 10:10:57 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:29:44.022 10:10:57 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:29:44.022 10:10:57 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:29:44.022 10:10:57 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:44.022 10:10:57 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@165 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:29:44.022 10:10:57 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@166 -- # get_subsystem_paths mdns0_nvme0 
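With the :4420 paths removed and :4421 re-confirmed by the discovery log page, the script re-reads its view of the host through /tmp/host.sock and string-compares it against the expected names, as the traces that follow show. A sketch of the two list queries it builds here, under the same rpc.py-for-rpc_cmd assumption as above:
  # subsystem (controller) names, sorted and flattened to one line; expected: mdns0_nvme0 mdns1_nvme0
  scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs
  # bdev names; expected: mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2
  scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs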
00:29:44.022 10:10:57 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 00:29:44.022 10:10:57 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:44.022 10:10:57 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:44.022 10:10:57 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:29:44.022 10:10:57 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n 00:29:44.022 10:10:57 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs 00:29:44.022 10:10:57 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:44.022 10:10:57 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@166 -- # [[ 4421 == \4\4\2\1 ]] 00:29:44.022 10:10:57 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@167 -- # get_subsystem_paths mdns1_nvme0 00:29:44.022 10:10:57 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0 00:29:44.022 10:10:57 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:29:44.022 10:10:57 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n 00:29:44.022 10:10:57 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:44.022 10:10:57 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:44.022 10:10:57 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs 00:29:44.022 10:10:57 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:44.022 10:10:57 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@167 -- # [[ 4421 == \4\4\2\1 ]] 00:29:44.022 10:10:57 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@168 -- # get_notification_count 00:29:44.022 10:10:57 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 4 00:29:44.022 10:10:57 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:44.022 10:10:57 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:44.022 10:10:57 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # jq '. 
| length' 00:29:44.022 10:10:57 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:44.022 10:10:57 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # notification_count=0 00:29:44.022 10:10:57 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@89 -- # notify_id=4 00:29:44.022 10:10:57 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@169 -- # [[ 0 == 0 ]] 00:29:44.022 10:10:57 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@171 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_mdns_discovery -b mdns 00:29:44.022 10:10:57 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:44.022 10:10:57 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:44.281 10:10:57 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:44.281 10:10:57 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@172 -- # sleep 1 00:29:44.281 [2024-07-15 10:10:57.694630] bdev_mdns_client.c: 424:bdev_nvme_avahi_iterate: *INFO*: Stopping avahi poller for service _nvme-disc._tcp 00:29:45.218 10:10:58 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@174 -- # get_mdns_discovery_svcs 00:29:45.218 10:10:58 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # sort 00:29:45.218 10:10:58 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info 00:29:45.218 10:10:58 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:45.218 10:10:58 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # jq -r '.[].name' 00:29:45.218 10:10:58 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:45.218 10:10:58 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # xargs 00:29:45.218 10:10:58 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:45.218 10:10:58 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@174 -- # [[ '' == '' ]] 00:29:45.218 10:10:58 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@175 -- # get_subsystem_names 00:29:45.218 10:10:58 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:29:45.218 10:10:58 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:45.218 10:10:58 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:45.218 10:10:58 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:29:45.218 10:10:58 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:29:45.218 10:10:58 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:29:45.218 10:10:58 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:45.218 10:10:58 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@175 -- # [[ '' == '' ]] 00:29:45.218 10:10:58 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@176 -- # get_bdev_list 00:29:45.218 10:10:58 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:45.218 10:10:58 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:45.218 10:10:58 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:45.218 10:10:58 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:29:45.218 10:10:58 nvmf_tcp.nvmf_mdns_discovery 
-- host/mdns_discovery.sh@65 -- # xargs 00:29:45.218 10:10:58 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:29:45.218 10:10:58 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:45.218 10:10:58 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@176 -- # [[ '' == '' ]] 00:29:45.218 10:10:58 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@177 -- # get_notification_count 00:29:45.218 10:10:58 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 4 00:29:45.218 10:10:58 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:45.218 10:10:58 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:45.218 10:10:58 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # jq '. | length' 00:29:45.478 10:10:58 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:45.478 10:10:58 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # notification_count=4 00:29:45.478 10:10:58 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@89 -- # notify_id=8 00:29:45.478 10:10:58 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@178 -- # [[ 4 == 4 ]] 00:29:45.478 10:10:58 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@181 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:29:45.478 10:10:58 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:45.478 10:10:58 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:45.478 10:10:58 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:45.478 10:10:58 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@182 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test 00:29:45.478 10:10:58 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@648 -- # local es=0 00:29:45.478 10:10:58 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test 00:29:45.478 10:10:58 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:29:45.478 10:10:58 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:45.478 10:10:58 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:29:45.478 10:10:58 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:45.478 10:10:58 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test 00:29:45.478 10:10:58 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:45.478 10:10:58 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:45.478 [2024-07-15 10:10:58.848453] bdev_mdns_client.c: 470:bdev_nvme_start_mdns_discovery: *ERROR*: mDNS discovery already running with name mdns 00:29:45.478 2024/07/15 10:10:58 error on JSON-RPC call, method: bdev_nvme_start_mdns_discovery, params: map[hostnqn:nqn.2021-12.io.spdk:test name:mdns svcname:_nvme-disc._http], err: error received for bdev_nvme_start_mdns_discovery method, err: 
Code=-17 Msg=File exists 00:29:45.478 request: 00:29:45.478 { 00:29:45.478 "method": "bdev_nvme_start_mdns_discovery", 00:29:45.478 "params": { 00:29:45.478 "name": "mdns", 00:29:45.478 "svcname": "_nvme-disc._http", 00:29:45.478 "hostnqn": "nqn.2021-12.io.spdk:test" 00:29:45.478 } 00:29:45.478 } 00:29:45.478 Got JSON-RPC error response 00:29:45.478 GoRPCClient: error on JSON-RPC call 00:29:45.478 10:10:58 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:29:45.478 10:10:58 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@651 -- # es=1 00:29:45.478 10:10:58 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:29:45.478 10:10:58 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:29:45.478 10:10:58 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:29:45.478 10:10:58 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@183 -- # sleep 5 00:29:46.046 [2024-07-15 10:10:59.432215] bdev_mdns_client.c: 395:mdns_browse_handler: *INFO*: (Browser) CACHE_EXHAUSTED 00:29:46.046 [2024-07-15 10:10:59.532018] bdev_mdns_client.c: 395:mdns_browse_handler: *INFO*: (Browser) ALL_FOR_NOW 00:29:46.306 [2024-07-15 10:10:59.631850] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'spdk1' of type '_nvme-disc._tcp' in domain 'local' 00:29:46.306 [2024-07-15 10:10:59.631992] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1716830599-074-updated-1705279005.local:8009 (10.0.0.3) 00:29:46.306 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:29:46.306 cookie is 0 00:29:46.306 is_local: 1 00:29:46.306 our_own: 0 00:29:46.306 wide_area: 0 00:29:46.306 multicast: 1 00:29:46.306 cached: 1 00:29:46.306 [2024-07-15 10:10:59.731651] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'spdk0' of type '_nvme-disc._tcp' in domain 'local' 00:29:46.306 [2024-07-15 10:10:59.731776] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1716830599-074-updated-1705279005.local:8009 (10.0.0.3) 00:29:46.306 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:29:46.306 cookie is 0 00:29:46.306 is_local: 1 00:29:46.306 our_own: 0 00:29:46.306 wide_area: 0 00:29:46.306 multicast: 1 00:29:46.306 cached: 1 00:29:46.306 [2024-07-15 10:10:59.731818] bdev_mdns_client.c: 322:mdns_resolve_handler: *ERROR*: mDNS discovery entry exists already. 
trid->traddr: 10.0.0.3 trid->trsvcid: 8009 00:29:46.306 [2024-07-15 10:10:59.831456] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'spdk1' of type '_nvme-disc._tcp' in domain 'local' 00:29:46.306 [2024-07-15 10:10:59.831569] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1716830599-074-updated-1705279005.local:8009 (10.0.0.2) 00:29:46.306 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:29:46.306 cookie is 0 00:29:46.306 is_local: 1 00:29:46.306 our_own: 0 00:29:46.306 wide_area: 0 00:29:46.306 multicast: 1 00:29:46.306 cached: 1 00:29:46.598 [2024-07-15 10:10:59.931267] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'spdk0' of type '_nvme-disc._tcp' in domain 'local' 00:29:46.598 [2024-07-15 10:10:59.931387] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1716830599-074-updated-1705279005.local:8009 (10.0.0.2) 00:29:46.598 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:29:46.598 cookie is 0 00:29:46.598 is_local: 1 00:29:46.598 our_own: 0 00:29:46.598 wide_area: 0 00:29:46.598 multicast: 1 00:29:46.598 cached: 1 00:29:46.598 [2024-07-15 10:10:59.931430] bdev_mdns_client.c: 322:mdns_resolve_handler: *ERROR*: mDNS discovery entry exists already. trid->traddr: 10.0.0.2 trid->trsvcid: 8009 00:29:47.173 [2024-07-15 10:11:00.634725] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:29:47.173 [2024-07-15 10:11:00.634832] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:29:47.173 [2024-07-15 10:11:00.634864] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:29:47.173 [2024-07-15 10:11:00.722677] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 new subsystem mdns0_nvme0 00:29:47.431 [2024-07-15 10:11:00.789684] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach mdns0_nvme0 done 00:29:47.431 [2024-07-15 10:11:00.789806] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:29:47.431 [2024-07-15 10:11:00.834231] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:29:47.431 [2024-07-15 10:11:00.834337] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:29:47.431 [2024-07-15 10:11:00.834366] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:29:47.431 [2024-07-15 10:11:00.920177] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem mdns1_nvme0 00:29:47.431 [2024-07-15 10:11:00.980157] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach mdns1_nvme0 done 00:29:47.431 [2024-07-15 10:11:00.980291] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:29:50.716 10:11:03 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@185 -- # get_mdns_discovery_svcs 00:29:50.716 10:11:03 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # xargs 00:29:50.716 10:11:03 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info 00:29:50.716 10:11:03 
nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # jq -r '.[].name' 00:29:50.716 10:11:03 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # sort 00:29:50.716 10:11:03 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:50.716 10:11:03 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:50.716 10:11:03 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:50.716 10:11:03 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@185 -- # [[ mdns == \m\d\n\s ]] 00:29:50.716 10:11:03 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@186 -- # get_discovery_ctrlrs 00:29:50.716 10:11:03 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:29:50.716 10:11:03 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # jq -r '.[].name' 00:29:50.716 10:11:03 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # sort 00:29:50.716 10:11:03 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # xargs 00:29:50.716 10:11:03 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:50.716 10:11:03 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:50.716 10:11:03 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:50.716 10:11:03 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@186 -- # [[ mdns0_nvme mdns1_nvme == \m\d\n\s\0\_\n\v\m\e\ \m\d\n\s\1\_\n\v\m\e ]] 00:29:50.716 10:11:03 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@187 -- # get_bdev_list 00:29:50.716 10:11:03 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:50.716 10:11:03 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:50.716 10:11:03 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:29:50.716 10:11:03 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:50.716 10:11:03 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:29:50.716 10:11:03 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:29:50.716 10:11:04 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:50.716 10:11:04 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@187 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:29:50.716 10:11:04 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@190 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:29:50.716 10:11:04 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@648 -- # local es=0 00:29:50.716 10:11:04 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:29:50.716 10:11:04 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:29:50.716 10:11:04 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:50.716 10:11:04 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:29:50.716 10:11:04 nvmf_tcp.nvmf_mdns_discovery -- 
common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:50.716 10:11:04 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:29:50.716 10:11:04 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:50.716 10:11:04 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:50.716 [2024-07-15 10:11:04.032747] bdev_mdns_client.c: 475:bdev_nvme_start_mdns_discovery: *ERROR*: mDNS discovery already running for service _nvme-disc._tcp 00:29:50.716 2024/07/15 10:11:04 error on JSON-RPC call, method: bdev_nvme_start_mdns_discovery, params: map[hostnqn:nqn.2021-12.io.spdk:test name:cdc svcname:_nvme-disc._tcp], err: error received for bdev_nvme_start_mdns_discovery method, err: Code=-17 Msg=File exists 00:29:50.716 request: 00:29:50.716 { 00:29:50.716 "method": "bdev_nvme_start_mdns_discovery", 00:29:50.716 "params": { 00:29:50.716 "name": "cdc", 00:29:50.716 "svcname": "_nvme-disc._tcp", 00:29:50.716 "hostnqn": "nqn.2021-12.io.spdk:test" 00:29:50.716 } 00:29:50.716 } 00:29:50.716 Got JSON-RPC error response 00:29:50.716 GoRPCClient: error on JSON-RPC call 00:29:50.716 10:11:04 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:29:50.716 10:11:04 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@651 -- # es=1 00:29:50.716 10:11:04 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:29:50.716 10:11:04 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:29:50.716 10:11:04 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:29:50.716 10:11:04 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@191 -- # get_discovery_ctrlrs 00:29:50.716 10:11:04 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # jq -r '.[].name' 00:29:50.716 10:11:04 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # sort 00:29:50.716 10:11:04 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:29:50.716 10:11:04 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:50.716 10:11:04 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:50.716 10:11:04 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # xargs 00:29:50.716 10:11:04 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:50.716 10:11:04 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@191 -- # [[ mdns0_nvme mdns1_nvme == \m\d\n\s\0\_\n\v\m\e\ \m\d\n\s\1\_\n\v\m\e ]] 00:29:50.716 10:11:04 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@192 -- # get_bdev_list 00:29:50.716 10:11:04 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:50.716 10:11:04 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:50.716 10:11:04 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:50.716 10:11:04 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:29:50.716 10:11:04 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:29:50.716 10:11:04 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:29:50.716 10:11:04 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 
0 == 0 ]] 00:29:50.716 10:11:04 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@192 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:29:50.716 10:11:04 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@193 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_mdns_discovery -b mdns 00:29:50.716 10:11:04 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:50.716 10:11:04 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:50.716 10:11:04 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:50.716 10:11:04 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@195 -- # rpc_cmd nvmf_stop_mdns_prr 00:29:50.716 10:11:04 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:50.716 10:11:04 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:50.716 10:11:04 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:50.716 10:11:04 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@197 -- # trap - SIGINT SIGTERM EXIT 00:29:50.716 10:11:04 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@199 -- # kill 94068 00:29:50.716 10:11:04 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@202 -- # wait 94068 00:29:50.716 [2024-07-15 10:11:04.255114] bdev_mdns_client.c: 424:bdev_nvme_avahi_iterate: *INFO*: Stopping avahi poller for service _nvme-disc._tcp 00:29:50.975 10:11:04 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@203 -- # kill 94097 00:29:50.975 Got SIGTERM, quitting. 00:29:50.975 10:11:04 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@204 -- # nvmftestfini 00:29:50.975 10:11:04 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:29:50.975 Leaving mDNS multicast group on interface nvmf_tgt_if2.IPv4 with address 10.0.0.3. 00:29:50.975 10:11:04 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@117 -- # sync 00:29:50.975 Leaving mDNS multicast group on interface nvmf_tgt_if.IPv4 with address 10.0.0.2. 00:29:50.975 avahi-daemon 0.8 exiting. 
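Teardown mirrors setup: the script stops the named mDNS discovery service, the avahi poller for _nvme-disc._tcp winds down, and the SIGTERM delivered to the avahi-daemon and target processes produces the "Leaving mDNS multicast group" and "avahi-daemon 0.8 exiting." lines above, after which nvmftestfini unloads the nvme-tcp modules in the records that follow. The stop call, written out under the same rpc.py assumption:
  # stop the discovery service started earlier under the name "mdns";
  # -b must match the name given to bdev_nvme_start_mdns_discovery
  scripts/rpc.py -s /tmp/host.sock bdev_nvme_stop_mdns_discovery -b mdns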
00:29:50.975 10:11:04 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:50.975 10:11:04 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@120 -- # set +e 00:29:50.975 10:11:04 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:50.975 10:11:04 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:50.975 rmmod nvme_tcp 00:29:50.975 rmmod nvme_fabrics 00:29:50.975 rmmod nvme_keyring 00:29:50.975 10:11:04 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:50.975 10:11:04 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@124 -- # set -e 00:29:50.975 10:11:04 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@125 -- # return 0 00:29:50.975 10:11:04 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@489 -- # '[' -n 94011 ']' 00:29:50.975 10:11:04 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@490 -- # killprocess 94011 00:29:50.975 10:11:04 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@948 -- # '[' -z 94011 ']' 00:29:50.975 10:11:04 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@952 -- # kill -0 94011 00:29:50.975 10:11:04 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@953 -- # uname 00:29:50.975 10:11:04 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:50.975 10:11:04 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 94011 00:29:50.975 10:11:04 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:29:50.975 10:11:04 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:29:50.975 killing process with pid 94011 00:29:50.975 10:11:04 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 94011' 00:29:50.975 10:11:04 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@967 -- # kill 94011 00:29:50.975 10:11:04 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@972 -- # wait 94011 00:29:51.235 10:11:04 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:29:51.235 10:11:04 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:29:51.235 10:11:04 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:29:51.235 10:11:04 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:51.235 10:11:04 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:51.235 10:11:04 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:51.235 10:11:04 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:51.235 10:11:04 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:51.235 10:11:04 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:29:51.235 00:29:51.235 real 0m20.350s 00:29:51.235 user 0m39.725s 00:29:51.235 sys 0m1.934s 00:29:51.235 10:11:04 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:51.235 10:11:04 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:51.235 ************************************ 00:29:51.235 END TEST nvmf_mdns_discovery 00:29:51.235 ************************************ 00:29:51.495 10:11:04 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 
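The next test (nvmf_host_multipath) starts from a clean slate, so nvmftestinit rebuilds the virtual network before any NVMe-oF traffic flows; the ip commands traced below carry that out. A condensed sketch of the topology they establish, using only the interface and address names that appear in the script itself (the /24 prefixes are as configured by nvmf_veth_init; the script also brings each link up with "ip link set <dev> up", omitted here):
  # target namespace with two veth endpoints (10.0.0.2 and 10.0.0.3),
  # initiator endpoint left in the root namespace at 10.0.0.1,
  # all bridged together over nvmf_br
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link add nvmf_br type bridge
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br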
00:29:51.495 10:11:04 nvmf_tcp -- nvmf/nvmf.sh@116 -- # [[ 1 -eq 1 ]] 00:29:51.495 10:11:04 nvmf_tcp -- nvmf/nvmf.sh@117 -- # run_test nvmf_host_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:29:51.495 10:11:04 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:29:51.495 10:11:04 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:51.495 10:11:04 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:51.495 ************************************ 00:29:51.495 START TEST nvmf_host_multipath 00:29:51.495 ************************************ 00:29:51.495 10:11:04 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:29:51.495 * Looking for test storage... 00:29:51.495 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:29:51.495 10:11:04 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:29:51.495 10:11:04 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@7 -- # uname -s 00:29:51.495 10:11:04 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:51.495 10:11:04 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:51.495 10:11:04 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:51.495 10:11:04 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:51.495 10:11:04 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:51.495 10:11:04 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:51.495 10:11:04 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:51.495 10:11:04 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:51.495 10:11:04 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:51.495 10:11:04 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:51.495 10:11:04 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec 00:29:51.495 10:11:04 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=a2b6b25a-cc90-4aea-9f09-c06f8a634aec 00:29:51.495 10:11:04 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:51.495 10:11:04 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:51.495 10:11:04 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:29:51.495 10:11:04 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:51.495 10:11:04 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:29:51.495 10:11:05 nvmf_tcp.nvmf_host_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:51.495 10:11:05 nvmf_tcp.nvmf_host_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:51.495 10:11:05 nvmf_tcp.nvmf_host_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:51.495 10:11:05 nvmf_tcp.nvmf_host_multipath -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:51.495 10:11:05 nvmf_tcp.nvmf_host_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:51.495 10:11:05 nvmf_tcp.nvmf_host_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:51.495 10:11:05 nvmf_tcp.nvmf_host_multipath -- paths/export.sh@5 -- # export PATH 00:29:51.495 10:11:05 nvmf_tcp.nvmf_host_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:51.495 10:11:05 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@47 -- # : 0 00:29:51.495 10:11:05 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:51.495 10:11:05 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:51.495 10:11:05 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:51.495 10:11:05 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:51.495 10:11:05 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:51.495 10:11:05 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:51.495 10:11:05 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:51.495 10:11:05 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:51.495 10:11:05 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:51.495 10:11:05 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:51.495 10:11:05 nvmf_tcp.nvmf_host_multipath 
-- host/multipath.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:29:51.495 10:11:05 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:29:51.495 10:11:05 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:29:51.495 10:11:05 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:29:51.495 10:11:05 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@30 -- # nvmftestinit 00:29:51.495 10:11:05 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:29:51.495 10:11:05 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:51.495 10:11:05 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:29:51.495 10:11:05 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:51.495 10:11:05 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:51.495 10:11:05 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:51.495 10:11:05 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:51.495 10:11:05 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:51.495 10:11:05 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:29:51.495 10:11:05 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:29:51.495 10:11:05 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:29:51.495 10:11:05 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:29:51.495 10:11:05 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:29:51.495 10:11:05 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@432 -- # nvmf_veth_init 00:29:51.495 10:11:05 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:51.495 10:11:05 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:51.495 10:11:05 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:29:51.495 10:11:05 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:29:51.495 10:11:05 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:29:51.495 10:11:05 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:29:51.495 10:11:05 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:29:51.495 10:11:05 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:51.495 10:11:05 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:29:51.495 10:11:05 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:29:51.495 10:11:05 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:29:51.495 10:11:05 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:29:51.495 10:11:05 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:29:51.495 10:11:05 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:29:51.495 Cannot 
find device "nvmf_tgt_br" 00:29:51.495 10:11:05 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@155 -- # true 00:29:51.495 10:11:05 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:29:51.495 Cannot find device "nvmf_tgt_br2" 00:29:51.495 10:11:05 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@156 -- # true 00:29:51.495 10:11:05 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:29:51.755 10:11:05 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:29:51.755 Cannot find device "nvmf_tgt_br" 00:29:51.755 10:11:05 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@158 -- # true 00:29:51.755 10:11:05 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:29:51.755 Cannot find device "nvmf_tgt_br2" 00:29:51.755 10:11:05 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@159 -- # true 00:29:51.755 10:11:05 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:29:51.755 10:11:05 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:29:51.755 10:11:05 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:29:51.755 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:29:51.755 10:11:05 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@162 -- # true 00:29:51.755 10:11:05 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:29:51.755 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:29:51.755 10:11:05 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@163 -- # true 00:29:51.755 10:11:05 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:29:51.755 10:11:05 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:29:51.755 10:11:05 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:29:51.755 10:11:05 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:29:51.755 10:11:05 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:29:51.755 10:11:05 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:29:51.755 10:11:05 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:29:51.755 10:11:05 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:29:51.755 10:11:05 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:29:51.755 10:11:05 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:29:51.755 10:11:05 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:29:51.755 10:11:05 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:29:51.755 10:11:05 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:29:51.755 10:11:05 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:29:51.755 10:11:05 
nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:29:51.755 10:11:05 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:29:51.755 10:11:05 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:29:51.755 10:11:05 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:29:51.755 10:11:05 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:29:52.014 10:11:05 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:29:52.014 10:11:05 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:29:52.014 10:11:05 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:29:52.014 10:11:05 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:29:52.014 10:11:05 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:29:52.014 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:52.014 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.095 ms 00:29:52.014 00:29:52.014 --- 10.0.0.2 ping statistics --- 00:29:52.014 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:52.014 rtt min/avg/max/mdev = 0.095/0.095/0.095/0.000 ms 00:29:52.014 10:11:05 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:29:52.014 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:29:52.014 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.095 ms 00:29:52.014 00:29:52.014 --- 10.0.0.3 ping statistics --- 00:29:52.014 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:52.014 rtt min/avg/max/mdev = 0.095/0.095/0.095/0.000 ms 00:29:52.014 10:11:05 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:29:52.014 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:52.014 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.043 ms 00:29:52.014 00:29:52.014 --- 10.0.0.1 ping statistics --- 00:29:52.014 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:52.014 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:29:52.014 10:11:05 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:52.014 10:11:05 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@433 -- # return 0 00:29:52.014 10:11:05 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:29:52.014 10:11:05 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:52.014 10:11:05 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:29:52.014 10:11:05 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:29:52.014 10:11:05 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:52.014 10:11:05 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:29:52.014 10:11:05 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:29:52.014 10:11:05 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@32 -- # nvmfappstart -m 0x3 00:29:52.014 10:11:05 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:29:52.014 10:11:05 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:52.014 10:11:05 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:29:52.014 10:11:05 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@481 -- # nvmfpid=94664 00:29:52.014 10:11:05 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@482 -- # waitforlisten 94664 00:29:52.014 10:11:05 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@829 -- # '[' -z 94664 ']' 00:29:52.014 10:11:05 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:52.014 10:11:05 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:52.014 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:52.014 10:11:05 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:52.014 10:11:05 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:52.014 10:11:05 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:29:52.014 10:11:05 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:29:52.014 [2024-07-15 10:11:05.479459] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:29:52.014 [2024-07-15 10:11:05.479523] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:52.273 [2024-07-15 10:11:05.620515] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:29:52.273 [2024-07-15 10:11:05.726277] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
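For orientation, the nvmf_veth_init sequence traced above boils down to the following (interface names and addresses are copied from the trace; this is a condensed sketch of commands already shown, not an extra script the test runs):

# target lives in its own network namespace
ip netns add nvmf_tgt_ns_spdk
# veth pairs: the *_if ends carry traffic, the *_br ends get enslaved to a bridge
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
# addressing: 10.0.0.1 = initiator, 10.0.0.2 and 10.0.0.3 = target portals
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
# (each interface is also brought up with 'ip link set <dev> up', elided here)
# bridge the root-namespace peers together
ip link add nvmf_br type bridge && ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
# allow NVMe/TCP traffic in and let the bridge forward it
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

The pings to 10.0.0.2, 10.0.0.3 and back to 10.0.0.1 above simply verify this wiring before the target application is started.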
00:29:52.273 [2024-07-15 10:11:05.726320] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:52.273 [2024-07-15 10:11:05.726343] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:52.273 [2024-07-15 10:11:05.726350] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:52.273 [2024-07-15 10:11:05.726355] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:52.273 [2024-07-15 10:11:05.726558] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:52.273 [2024-07-15 10:11:05.726558] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:52.842 10:11:06 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:52.842 10:11:06 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@862 -- # return 0 00:29:52.842 10:11:06 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:29:52.842 10:11:06 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:52.842 10:11:06 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:29:52.842 10:11:06 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:52.842 10:11:06 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@33 -- # nvmfapp_pid=94664 00:29:52.842 10:11:06 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:29:53.102 [2024-07-15 10:11:06.588531] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:53.102 10:11:06 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:29:53.362 Malloc0 00:29:53.362 10:11:06 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:29:53.621 10:11:07 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:53.881 10:11:07 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:53.881 [2024-07-15 10:11:07.397283] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:53.881 10:11:07 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:29:54.141 [2024-07-15 10:11:07.576976] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:29:54.141 10:11:07 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:29:54.141 10:11:07 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@44 -- # bdevperf_pid=94762 00:29:54.141 10:11:07 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@46 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:29:54.141 10:11:07 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@47 -- # 
waitforlisten 94762 /var/tmp/bdevperf.sock 00:29:54.141 10:11:07 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@829 -- # '[' -z 94762 ']' 00:29:54.141 10:11:07 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:54.141 10:11:07 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:54.141 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:29:54.141 10:11:07 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:54.141 10:11:07 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:54.141 10:11:07 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:29:55.100 10:11:08 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:55.100 10:11:08 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@862 -- # return 0 00:29:55.100 10:11:08 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:29:55.359 10:11:08 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:29:55.618 Nvme0n1 00:29:55.618 10:11:09 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:29:55.878 Nvme0n1 00:29:55.878 10:11:09 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:29:55.878 10:11:09 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@78 -- # sleep 1 00:29:57.257 10:11:10 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@81 -- # set_ANA_state non_optimized optimized 00:29:57.257 10:11:10 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:29:57.258 10:11:10 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:29:57.517 10:11:10 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@83 -- # confirm_io_on_port optimized 4421 00:29:57.517 10:11:10 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 94664 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:29:57.517 10:11:10 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=94850 00:29:57.517 10:11:10 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:30:04.097 10:11:16 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:30:04.097 10:11:16 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:30:04.097 10:11:17 
nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:30:04.097 10:11:17 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:30:04.097 Attaching 4 probes... 00:30:04.097 @path[10.0.0.2, 4421]: 21025 00:30:04.097 @path[10.0.0.2, 4421]: 21436 00:30:04.097 @path[10.0.0.2, 4421]: 21517 00:30:04.097 @path[10.0.0.2, 4421]: 22070 00:30:04.097 @path[10.0.0.2, 4421]: 21499 00:30:04.097 10:11:17 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:30:04.097 10:11:17 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:30:04.097 10:11:17 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:30:04.097 10:11:17 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:30:04.097 10:11:17 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:30:04.097 10:11:17 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:30:04.097 10:11:17 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 94850 00:30:04.097 10:11:17 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:30:04.097 10:11:17 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@86 -- # set_ANA_state non_optimized inaccessible 00:30:04.097 10:11:17 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:30:04.098 10:11:17 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:30:04.098 10:11:17 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@87 -- # confirm_io_on_port non_optimized 4420 00:30:04.098 10:11:17 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 94664 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:30:04.098 10:11:17 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=94981 00:30:04.098 10:11:17 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:30:10.665 10:11:23 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:30:10.665 10:11:23 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:30:10.665 10:11:23 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:30:10.665 10:11:23 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:30:10.665 Attaching 4 probes... 
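Each confirm_io_on_port pass traced in this run repeats the pattern just completed for port 4421. Condensed into one place (the individual commands are taken from the trace; the exact pipeline plumbing between the traced cut/awk/sed steps and the redirection of the bpftrace output into trace.txt are assumptions made for readability):

# steer ANA state per listener, e.g. 4420 non-optimized and 4421 optimized
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized
# attach the nvmf_path.bt probes to the target (pid 94664) and give bdevperf time to issue I/O
/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 94664 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt > trace.txt &   # assumed redirection
sleep 6
# port the target claims is in the requested ANA state
active_port=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 \
  | jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid')
# port the probes actually saw I/O on: first "@path[10.0.0.2, <port>]: <count>" line in trace.txt
port=$(cut -d ']' -f1 trace.txt | awk '$1=="@path[10.0.0.2," {print $2}' | sed -n 1p)
[[ $port == "$active_port" ]]   # both must name the same listener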
00:30:10.665 @path[10.0.0.2, 4420]: 17994 00:30:10.665 @path[10.0.0.2, 4420]: 22093 00:30:10.665 @path[10.0.0.2, 4420]: 23275 00:30:10.665 @path[10.0.0.2, 4420]: 23485 00:30:10.665 @path[10.0.0.2, 4420]: 23354 00:30:10.665 10:11:23 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:30:10.665 10:11:23 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:30:10.665 10:11:23 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:30:10.665 10:11:23 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:30:10.665 10:11:23 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:30:10.665 10:11:23 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:30:10.665 10:11:23 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 94981 00:30:10.665 10:11:23 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:30:10.665 10:11:23 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@89 -- # set_ANA_state inaccessible optimized 00:30:10.665 10:11:23 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:30:10.666 10:11:23 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:30:10.666 10:11:24 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@90 -- # confirm_io_on_port optimized 4421 00:30:10.666 10:11:24 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 94664 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:30:10.666 10:11:24 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=95111 00:30:10.666 10:11:24 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:30:17.236 10:11:30 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:30:17.236 10:11:30 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:30:17.236 10:11:30 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:30:17.236 10:11:30 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:30:17.236 Attaching 4 probes... 
00:30:17.236 @path[10.0.0.2, 4421]: 17336 00:30:17.236 @path[10.0.0.2, 4421]: 22878 00:30:17.236 @path[10.0.0.2, 4421]: 22483 00:30:17.236 @path[10.0.0.2, 4421]: 21639 00:30:17.236 @path[10.0.0.2, 4421]: 21760 00:30:17.236 10:11:30 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:30:17.236 10:11:30 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:30:17.236 10:11:30 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:30:17.236 10:11:30 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:30:17.236 10:11:30 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:30:17.236 10:11:30 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:30:17.236 10:11:30 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 95111 00:30:17.236 10:11:30 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:30:17.236 10:11:30 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@93 -- # set_ANA_state inaccessible inaccessible 00:30:17.236 10:11:30 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:30:17.236 10:11:30 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:30:17.236 10:11:30 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@94 -- # confirm_io_on_port '' '' 00:30:17.236 10:11:30 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 94664 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:30:17.236 10:11:30 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=95247 00:30:17.236 10:11:30 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:30:23.889 10:11:36 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="") | .address.trsvcid' 00:30:23.889 10:11:36 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:30:23.889 10:11:37 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port= 00:30:23.889 10:11:37 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:30:23.889 Attaching 4 probes... 
00:30:23.889 00:30:23.889 00:30:23.889 00:30:23.889 00:30:23.889 00:30:23.889 10:11:37 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:30:23.889 10:11:37 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:30:23.889 10:11:37 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:30:23.889 10:11:37 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port= 00:30:23.889 10:11:37 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ '' == '' ]] 00:30:23.889 10:11:37 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ '' == '' ]] 00:30:23.889 10:11:37 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 95247 00:30:23.889 10:11:37 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:30:23.889 10:11:37 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@96 -- # set_ANA_state non_optimized optimized 00:30:23.889 10:11:37 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:30:23.889 10:11:37 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:30:23.889 10:11:37 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@97 -- # confirm_io_on_port optimized 4421 00:30:23.889 10:11:37 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=95372 00:30:23.889 10:11:37 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 94664 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:30:23.889 10:11:37 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:30:30.458 10:11:43 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:30:30.458 10:11:43 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:30:30.458 10:11:43 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:30:30.458 10:11:43 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:30:30.458 Attaching 4 probes... 
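In the both-listeners-inaccessible round a few lines back, the same machinery degenerates to an empty-string check: the traced filter

jq -r '.[] | select (.ana_states[0].ana_state=="") | .address.trsvcid'

matches no listener, so active_port stays empty, and with no usable path the bpftrace output carries only timestamps and no @path lines, so the parsed port is empty as well; the [[ '' == '' ]] comparisons are what pass in that round.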
00:30:30.458 @path[10.0.0.2, 4421]: 21053 00:30:30.458 @path[10.0.0.2, 4421]: 20854 00:30:30.458 @path[10.0.0.2, 4421]: 21411 00:30:30.458 @path[10.0.0.2, 4421]: 21215 00:30:30.458 @path[10.0.0.2, 4421]: 21352 00:30:30.458 10:11:43 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:30:30.458 10:11:43 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:30:30.458 10:11:43 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:30:30.458 10:11:43 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:30:30.458 10:11:43 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:30:30.458 10:11:43 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:30:30.458 10:11:43 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 95372 00:30:30.458 10:11:43 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:30:30.458 10:11:43 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:30:30.458 [2024-07-15 10:11:43.898720] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a20310 is same with the state(5) to be set 00:30:30.458 [2024-07-15 10:11:43.898770] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a20310 is same with the state(5) to be set 00:30:30.458 [2024-07-15 10:11:43.898778] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a20310 is same with the state(5) to be set 00:30:30.458 [2024-07-15 10:11:43.898784] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a20310 is same with the state(5) to be set 00:30:30.458 [2024-07-15 10:11:43.898789] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a20310 is same with the state(5) to be set 00:30:30.458 [2024-07-15 10:11:43.898795] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a20310 is same with the state(5) to be set 00:30:30.458 [2024-07-15 10:11:43.898800] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a20310 is same with the state(5) to be set 00:30:30.458 [2024-07-15 10:11:43.898806] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a20310 is same with the state(5) to be set 00:30:30.458 [2024-07-15 10:11:43.898811] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a20310 is same with the state(5) to be set 00:30:30.458 [2024-07-15 10:11:43.898817] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a20310 is same with the state(5) to be set 00:30:30.458 [2024-07-15 10:11:43.898822] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a20310 is same with the state(5) to be set 00:30:30.458 [2024-07-15 10:11:43.898827] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a20310 is same with the state(5) to be set 00:30:30.458 [2024-07-15 10:11:43.898832] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a20310 is same with the state(5) to be set 00:30:30.458 [2024-07-15 10:11:43.898837] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a20310 is same with the state(5) to be set 
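The nvmf_subsystem_remove_listener call traced here starts the live-failover leg of the test; pulled together with the rpc.py calls that follow below, the sequence is (commands copied from the trace, comments added):

# drop the optimized 4421 listener out from under the host; I/O must fail over to 4420
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
sleep 1
# confirm_io_on_port non_optimized 4420  -> probes must now report I/O on 4420 only
# then bring the second portal back and mark it optimized again
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized
sleep 6
# confirm_io_on_port optimized 4421  -> I/O moves back to the restored path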
00:30:30.458 [2024-07-15 10:11:43.898843] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a20310 is same with the state(5) to be set 00:30:30.458 [2024-07-15 10:11:43.898848] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a20310 is same with the state(5) to be set 00:30:30.458 10:11:43 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@101 -- # sleep 1 00:30:31.395 10:11:44 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@104 -- # confirm_io_on_port non_optimized 4420 00:30:31.395 10:11:44 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=95507 00:30:31.395 10:11:44 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:30:31.395 10:11:44 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 94664 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:30:38.001 10:11:50 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:30:38.001 10:11:50 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:30:38.001 10:11:51 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:30:38.001 10:11:51 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:30:38.001 Attaching 4 probes... 00:30:38.001 @path[10.0.0.2, 4420]: 21057 00:30:38.001 @path[10.0.0.2, 4420]: 21607 00:30:38.001 @path[10.0.0.2, 4420]: 22199 00:30:38.001 @path[10.0.0.2, 4420]: 22657 00:30:38.001 @path[10.0.0.2, 4420]: 22245 00:30:38.001 10:11:51 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:30:38.001 10:11:51 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:30:38.001 10:11:51 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:30:38.001 10:11:51 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:30:38.001 10:11:51 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:30:38.002 10:11:51 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:30:38.002 10:11:51 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 95507 00:30:38.002 10:11:51 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:30:38.002 10:11:51 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:30:38.002 [2024-07-15 10:11:51.357857] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:30:38.002 10:11:51 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:30:38.002 10:11:51 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@111 -- # sleep 6 00:30:44.572 10:11:57 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@112 -- # confirm_io_on_port optimized 4421 00:30:44.572 10:11:57 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 94664 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:30:44.572 10:11:57 
nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=95700 00:30:44.572 10:11:57 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:30:51.159 10:12:03 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:30:51.159 10:12:03 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:30:51.159 10:12:03 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:30:51.159 10:12:03 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:30:51.159 Attaching 4 probes... 00:30:51.159 @path[10.0.0.2, 4421]: 20608 00:30:51.159 @path[10.0.0.2, 4421]: 21760 00:30:51.159 @path[10.0.0.2, 4421]: 21226 00:30:51.159 @path[10.0.0.2, 4421]: 18609 00:30:51.159 @path[10.0.0.2, 4421]: 18620 00:30:51.159 10:12:03 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:30:51.159 10:12:03 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:30:51.159 10:12:03 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:30:51.159 10:12:03 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:30:51.159 10:12:03 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:30:51.159 10:12:03 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:30:51.159 10:12:03 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 95700 00:30:51.159 10:12:03 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:30:51.159 10:12:03 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@114 -- # killprocess 94762 00:30:51.159 10:12:03 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@948 -- # '[' -z 94762 ']' 00:30:51.159 10:12:03 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@952 -- # kill -0 94762 00:30:51.159 10:12:03 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@953 -- # uname 00:30:51.159 10:12:03 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:30:51.159 10:12:03 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 94762 00:30:51.159 killing process with pid 94762 00:30:51.159 10:12:03 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:30:51.159 10:12:03 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:30:51.159 10:12:03 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@966 -- # echo 'killing process with pid 94762' 00:30:51.159 10:12:03 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@967 -- # kill 94762 00:30:51.159 10:12:03 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@972 -- # wait 94762 00:30:51.159 Connection closed with partial response: 00:30:51.159 00:30:51.159 00:30:51.159 10:12:04 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@116 -- # wait 94762 00:30:51.159 10:12:04 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@118 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:30:51.159 [2024-07-15 10:11:07.634938] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
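The bdevperf log being dumped here (try.txt) is the initiator side of the run. As a reminder of how that process was wired up earlier in the trace (paths shortened to the spdk_repo checkout, backgrounding with & assumed; flag meanings in the comments are the usual bdevperf/bdev_nvme ones, added for readability rather than taken from the log):

# bdevperf: queue depth 128, 4 KiB I/O, verify workload, 90 s run, started idle (-z) behind its own RPC socket
build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 &
# attach the same subsystem through both portals as one multipath controller Nvme0
scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10
scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10
# drive the workload over RPC while the ANA checks above run against the target
examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests &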
00:30:51.159 [2024-07-15 10:11:07.635021] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94762 ] 00:30:51.159 [2024-07-15 10:11:07.770316] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:51.159 [2024-07-15 10:11:07.876227] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:30:51.160 Running I/O for 90 seconds... 00:30:51.160 [2024-07-15 10:11:17.496333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:37032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.160 [2024-07-15 10:11:17.496418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:30:51.160 [2024-07-15 10:11:17.496470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:36400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.160 [2024-07-15 10:11:17.496482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:51.160 [2024-07-15 10:11:17.496499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:36408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.160 [2024-07-15 10:11:17.496510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:51.160 [2024-07-15 10:11:17.496525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:36416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.160 [2024-07-15 10:11:17.496535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.160 [2024-07-15 10:11:17.496552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:36424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.160 [2024-07-15 10:11:17.496562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:51.160 [2024-07-15 10:11:17.496577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:36432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.160 [2024-07-15 10:11:17.496586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:51.160 [2024-07-15 10:11:17.496602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:36440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.160 [2024-07-15 10:11:17.496611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:30:51.160 [2024-07-15 10:11:17.496626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:36448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.160 [2024-07-15 10:11:17.496636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:30:51.160 [2024-07-15 10:11:17.496651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:99 nsid:1 lba:36456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.160 [2024-07-15 10:11:17.496671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:30:51.160 [2024-07-15 10:11:17.496687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:36464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.160 [2024-07-15 10:11:17.496697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:30:51.160 [2024-07-15 10:11:17.496712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:36472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.160 [2024-07-15 10:11:17.496738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:30:51.160 [2024-07-15 10:11:17.496755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:36480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.160 [2024-07-15 10:11:17.496765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:30:51.160 [2024-07-15 10:11:17.496780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:36488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.160 [2024-07-15 10:11:17.496790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:30:51.160 [2024-07-15 10:11:17.496805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:36496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.160 [2024-07-15 10:11:17.496815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:30:51.160 [2024-07-15 10:11:17.496831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:36504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.160 [2024-07-15 10:11:17.496840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:30:51.160 [2024-07-15 10:11:17.496855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:36512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.160 [2024-07-15 10:11:17.496866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:30:51.160 [2024-07-15 10:11:17.496882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:36520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.160 [2024-07-15 10:11:17.496892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:30:51.160 [2024-07-15 10:11:17.497088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:37040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.160 [2024-07-15 10:11:17.497103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:30:51.160 [2024-07-15 10:11:17.497121] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:37048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.160 [2024-07-15 10:11:17.497131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:30:51.160 [2024-07-15 10:11:17.497148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:37056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.160 [2024-07-15 10:11:17.497158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:30:51.160 [2024-07-15 10:11:17.497174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:37064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.160 [2024-07-15 10:11:17.497183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:30:51.160 [2024-07-15 10:11:17.497199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:37072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.160 [2024-07-15 10:11:17.497208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:30:51.160 [2024-07-15 10:11:17.497224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:37080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.160 [2024-07-15 10:11:17.497235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:30:51.160 [2024-07-15 10:11:17.497260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:37088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.160 [2024-07-15 10:11:17.497270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:30:51.160 [2024-07-15 10:11:17.498483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:37096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.160 [2024-07-15 10:11:17.498506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:30:51.160 [2024-07-15 10:11:17.498525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:37104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.160 [2024-07-15 10:11:17.498535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:30:51.160 [2024-07-15 10:11:17.498551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:37112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.160 [2024-07-15 10:11:17.498561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:30:51.160 [2024-07-15 10:11:17.498576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:37120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.160 [2024-07-15 10:11:17.498586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0018 
p:0 m:0 dnr:0 00:30:51.160 [2024-07-15 10:11:17.498602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:37128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.160 [2024-07-15 10:11:17.498612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:30:51.160 [2024-07-15 10:11:17.498628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:37136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.160 [2024-07-15 10:11:17.498638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:30:51.160 [2024-07-15 10:11:17.498653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:37144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.160 [2024-07-15 10:11:17.498674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:30:51.160 [2024-07-15 10:11:17.498689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:37152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.160 [2024-07-15 10:11:17.498699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:30:51.160 [2024-07-15 10:11:17.498715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:37160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.160 [2024-07-15 10:11:17.498725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:30:51.160 [2024-07-15 10:11:17.498742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:37168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.160 [2024-07-15 10:11:17.498758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:30:51.160 [2024-07-15 10:11:17.498774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:37176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.160 [2024-07-15 10:11:17.498783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:30:51.160 [2024-07-15 10:11:17.498808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:37184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.160 [2024-07-15 10:11:17.498818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:30:51.160 [2024-07-15 10:11:17.498834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:37192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.160 [2024-07-15 10:11:17.498844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:51.160 [2024-07-15 10:11:17.498859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:37200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.160 [2024-07-15 10:11:17.498869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:51.160 [2024-07-15 10:11:17.498885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:37208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.160 [2024-07-15 10:11:17.498894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:30:51.160 [2024-07-15 10:11:17.498910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:37216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.160 [2024-07-15 10:11:17.498920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:30:51.160 [2024-07-15 10:11:17.498936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:37224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.160 [2024-07-15 10:11:17.498945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:30:51.160 [2024-07-15 10:11:17.498961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:37232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.160 [2024-07-15 10:11:17.498971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:30:51.161 [2024-07-15 10:11:17.498986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:37240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.161 [2024-07-15 10:11:17.498996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:30:51.161 [2024-07-15 10:11:17.499011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:37248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.161 [2024-07-15 10:11:17.499020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:30:51.161 [2024-07-15 10:11:17.499036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:37256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.161 [2024-07-15 10:11:17.499046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:30:51.161 [2024-07-15 10:11:17.499062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:37264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.161 [2024-07-15 10:11:17.499072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:30:51.161 [2024-07-15 10:11:17.499087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:37272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.161 [2024-07-15 10:11:17.499096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:30:51.161 [2024-07-15 10:11:17.499113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:37280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.161 [2024-07-15 10:11:17.499127] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:30:51.161 [2024-07-15 10:11:17.499144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:37288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.161 [2024-07-15 10:11:17.499153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:30:51.161 [2024-07-15 10:11:17.499169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:37296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.161 [2024-07-15 10:11:17.499181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:30:51.161 [2024-07-15 10:11:17.499196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:37304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.161 [2024-07-15 10:11:17.499206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:30:51.161 [2024-07-15 10:11:17.499221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:37312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.161 [2024-07-15 10:11:17.499231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:30:51.161 [2024-07-15 10:11:17.499247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:37320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.161 [2024-07-15 10:11:17.499256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:30:51.161 [2024-07-15 10:11:17.499272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:37328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.161 [2024-07-15 10:11:17.499282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:30:51.161 [2024-07-15 10:11:17.499298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:37336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.161 [2024-07-15 10:11:17.499307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:30:51.161 [2024-07-15 10:11:17.499322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:37344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.161 [2024-07-15 10:11:17.499332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:30:51.161 [2024-07-15 10:11:17.499347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:37352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.161 [2024-07-15 10:11:17.499357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:30:51.161 [2024-07-15 10:11:17.499373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:36528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:30:51.161 [2024-07-15 10:11:17.499382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:30:51.161 [2024-07-15 10:11:17.499811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:36536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.161 [2024-07-15 10:11:17.499829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:30:51.161 [2024-07-15 10:11:17.499847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:36544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.161 [2024-07-15 10:11:17.499866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:30:51.161 [2024-07-15 10:11:17.499882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:36552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.161 [2024-07-15 10:11:17.499892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:30:51.161 [2024-07-15 10:11:17.499907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:36560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.161 [2024-07-15 10:11:17.499917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:30:51.161 [2024-07-15 10:11:17.499932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:36568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.161 [2024-07-15 10:11:17.499942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:30:51.161 [2024-07-15 10:11:17.499959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:36576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.161 [2024-07-15 10:11:17.499968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:30:51.161 [2024-07-15 10:11:17.499985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:36584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.161 [2024-07-15 10:11:17.499995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:30:51.161 [2024-07-15 10:11:17.500011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:36592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.161 [2024-07-15 10:11:17.500024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:30:51.161 [2024-07-15 10:11:17.500040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:36600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.161 [2024-07-15 10:11:17.500049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:30:51.161 [2024-07-15 10:11:17.500065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 
nsid:1 lba:36608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.161 [2024-07-15 10:11:17.500075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:30:51.161 [2024-07-15 10:11:17.500091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:36616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.161 [2024-07-15 10:11:17.500101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:51.161 [2024-07-15 10:11:17.500117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:36624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.161 [2024-07-15 10:11:17.500126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:51.161 [2024-07-15 10:11:17.500142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:36632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.161 [2024-07-15 10:11:17.500152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:30:51.161 [2024-07-15 10:11:17.500168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:36640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.161 [2024-07-15 10:11:17.500177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:30:51.161 [2024-07-15 10:11:17.500198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:36648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.161 [2024-07-15 10:11:17.500208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:30:51.161 [2024-07-15 10:11:17.500224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:37360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.161 [2024-07-15 10:11:17.500233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:30:51.161 [2024-07-15 10:11:17.500249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:37368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.161 [2024-07-15 10:11:17.500259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:30:51.161 [2024-07-15 10:11:17.500274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:37376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.161 [2024-07-15 10:11:17.500284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:30:51.161 [2024-07-15 10:11:17.500300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:37384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.161 [2024-07-15 10:11:17.500309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:30:51.161 [2024-07-15 10:11:17.500325] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:37392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.161 [2024-07-15 10:11:17.500334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:30:51.161 [2024-07-15 10:11:17.500350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:37400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.161 [2024-07-15 10:11:17.500359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:30:51.161 [2024-07-15 10:11:17.500389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:37408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.161 [2024-07-15 10:11:17.500399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:30:51.161 [2024-07-15 10:11:17.500415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:36656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.161 [2024-07-15 10:11:17.500425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:30:51.161 [2024-07-15 10:11:17.500441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:36664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.162 [2024-07-15 10:11:17.500452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:30:51.162 [2024-07-15 10:11:17.500468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:36672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.162 [2024-07-15 10:11:17.500477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:30:51.162 [2024-07-15 10:11:17.500493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:36680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.162 [2024-07-15 10:11:17.500503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:30:51.162 [2024-07-15 10:11:17.500523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:36688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.162 [2024-07-15 10:11:17.500533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:30:51.162 [2024-07-15 10:11:17.500548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:36696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.162 [2024-07-15 10:11:17.500558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:30:51.162 [2024-07-15 10:11:17.500573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:36704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.162 [2024-07-15 10:11:17.500583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 
00:30:51.162 [2024-07-15 10:11:17.500598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:37416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.162 [2024-07-15 10:11:17.500608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:30:51.162 [2024-07-15 10:11:17.500624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:36712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.162 [2024-07-15 10:11:17.500634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:30:51.162 [2024-07-15 10:11:17.500650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:36720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.162 [2024-07-15 10:11:17.500670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:30:51.162 [2024-07-15 10:11:17.500687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:36728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.162 [2024-07-15 10:11:17.500696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:30:51.162 [2024-07-15 10:11:17.500712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:36736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.162 [2024-07-15 10:11:17.500722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:30:51.162 [2024-07-15 10:11:17.500737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:36744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.162 [2024-07-15 10:11:17.500747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:30:51.162 [2024-07-15 10:11:17.500763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:36752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.162 [2024-07-15 10:11:17.500773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:30:51.162 [2024-07-15 10:11:17.500789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:36760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.162 [2024-07-15 10:11:17.500798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:30:51.162 [2024-07-15 10:11:17.500816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:36768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.162 [2024-07-15 10:11:17.500825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:30:51.162 [2024-07-15 10:11:17.500841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:36776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.162 [2024-07-15 10:11:17.500854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:30:51.162 [2024-07-15 10:11:17.500870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:36784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.162 [2024-07-15 10:11:17.500882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:30:51.162 [2024-07-15 10:11:17.500897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:36792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.162 [2024-07-15 10:11:17.500907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:30:51.162 [2024-07-15 10:11:17.500923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:36800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.162 [2024-07-15 10:11:17.500932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:30:51.162 [2024-07-15 10:11:17.500949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:36808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.162 [2024-07-15 10:11:17.500958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:51.162 [2024-07-15 10:11:17.500974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:36816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.162 [2024-07-15 10:11:17.500983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:51.162 [2024-07-15 10:11:17.501000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:36824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.162 [2024-07-15 10:11:17.501010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:30:51.162 [2024-07-15 10:11:17.501025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:36832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.162 [2024-07-15 10:11:17.501034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:30:51.162 [2024-07-15 10:11:17.501050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:36840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.162 [2024-07-15 10:11:17.501060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:30:51.162 [2024-07-15 10:11:17.501076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:36848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.162 [2024-07-15 10:11:17.501086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:30:51.162 [2024-07-15 10:11:17.501101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:36856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.162 [2024-07-15 10:11:17.501110] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:30:51.162 [2024-07-15 10:11:17.501127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:36864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.162 [2024-07-15 10:11:17.501136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:30:51.162 [2024-07-15 10:11:17.501153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:36872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.162 [2024-07-15 10:11:17.501166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:30:51.162 [2024-07-15 10:11:17.501182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:36880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.162 [2024-07-15 10:11:17.501191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:30:51.162 [2024-07-15 10:11:17.501207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:36888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.162 [2024-07-15 10:11:17.501217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:30:51.162 [2024-07-15 10:11:17.501234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:36896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.162 [2024-07-15 10:11:17.501244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:30:51.162 [2024-07-15 10:11:17.501260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:36904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.162 [2024-07-15 10:11:17.501270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:30:51.162 [2024-07-15 10:11:17.501286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:36912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.162 [2024-07-15 10:11:17.501296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:30:51.162 [2024-07-15 10:11:17.501312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:36920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.162 [2024-07-15 10:11:17.501322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:30:51.162 [2024-07-15 10:11:17.501337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:36928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.162 [2024-07-15 10:11:17.501347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:30:51.162 [2024-07-15 10:11:17.501363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:36936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:30:51.162 [2024-07-15 10:11:17.501373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:30:51.162 [2024-07-15 10:11:17.501389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:36944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.162 [2024-07-15 10:11:17.501398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:30:51.162 [2024-07-15 10:11:17.501414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:36952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.162 [2024-07-15 10:11:17.501424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:30:51.162 [2024-07-15 10:11:17.501439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:36960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.162 [2024-07-15 10:11:17.501449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:30:51.162 [2024-07-15 10:11:17.501465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:36968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.162 [2024-07-15 10:11:17.501478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:30:51.162 [2024-07-15 10:11:17.501494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:36976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.162 [2024-07-15 10:11:17.501504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:30:51.163 [2024-07-15 10:11:17.501520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:36984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.163 [2024-07-15 10:11:17.501529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:30:51.163 [2024-07-15 10:11:17.501545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:36992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.163 [2024-07-15 10:11:17.501555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:30:51.163 [2024-07-15 10:11:17.501571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:37000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.163 [2024-07-15 10:11:17.501581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:30:51.163 [2024-07-15 10:11:17.501597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:37008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.163 [2024-07-15 10:11:17.501606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:30:51.163 [2024-07-15 10:11:17.501622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 
lba:37016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.163 [2024-07-15 10:11:17.501631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:51.163 [2024-07-15 10:11:17.501649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:37024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.163 [2024-07-15 10:11:17.501667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:30:51.163 [2024-07-15 10:11:23.948559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:83648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.163 [2024-07-15 10:11:23.948625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:30:51.163 [2024-07-15 10:11:23.948689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:83752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.163 [2024-07-15 10:11:23.948703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:30:51.163 [2024-07-15 10:11:23.948719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:83760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.163 [2024-07-15 10:11:23.948729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:30:51.163 [2024-07-15 10:11:23.948745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:83768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.163 [2024-07-15 10:11:23.948754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:30:51.163 [2024-07-15 10:11:23.948770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:83776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.163 [2024-07-15 10:11:23.948779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:30:51.163 [2024-07-15 10:11:23.948817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:83784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.163 [2024-07-15 10:11:23.948827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:30:51.163 [2024-07-15 10:11:23.948842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:83792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.163 [2024-07-15 10:11:23.948852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:30:51.163 [2024-07-15 10:11:23.948878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:83800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.163 [2024-07-15 10:11:23.948886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:30:51.163 [2024-07-15 10:11:23.948900] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:83808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.163 [2024-07-15 10:11:23.948908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:30:51.163 [2024-07-15 10:11:23.948922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:83816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.163 [2024-07-15 10:11:23.948930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:30:51.163 [2024-07-15 10:11:23.948944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:83824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.163 [2024-07-15 10:11:23.948953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:30:51.163 [2024-07-15 10:11:23.948967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:83832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.163 [2024-07-15 10:11:23.948975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:30:51.163 [2024-07-15 10:11:23.948989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:83840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.163 [2024-07-15 10:11:23.948997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:30:51.163 [2024-07-15 10:11:23.949011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:83848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.163 [2024-07-15 10:11:23.949019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:30:51.163 [2024-07-15 10:11:23.949034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:83856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.163 [2024-07-15 10:11:23.949042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:30:51.163 [2024-07-15 10:11:23.949056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:83864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.163 [2024-07-15 10:11:23.949065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:30:51.163 [2024-07-15 10:11:23.949079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:83872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.163 [2024-07-15 10:11:23.949087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:30:51.163 [2024-07-15 10:11:23.949107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:83880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.163 [2024-07-15 10:11:23.949117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 
00:30:51.163 [2024-07-15 10:11:23.949131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:83888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.163 [2024-07-15 10:11:23.949139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:30:51.163 [2024-07-15 10:11:23.949153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:83896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.163 [2024-07-15 10:11:23.949161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:30:51.163 [2024-07-15 10:11:23.949175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:83904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.163 [2024-07-15 10:11:23.949183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:30:51.163 [2024-07-15 10:11:23.949197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:83912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.163 [2024-07-15 10:11:23.949205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:30:51.163 [2024-07-15 10:11:23.949219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:83920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.163 [2024-07-15 10:11:23.949227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:30:51.163 [2024-07-15 10:11:23.949241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:83928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.163 [2024-07-15 10:11:23.949250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:30:51.163 [2024-07-15 10:11:23.949745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:83936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.163 [2024-07-15 10:11:23.949768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:30:51.163 [2024-07-15 10:11:23.949786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:83944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.163 [2024-07-15 10:11:23.949794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:30:51.163 [2024-07-15 10:11:23.949811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:83952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.163 [2024-07-15 10:11:23.949820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:30:51.163 [2024-07-15 10:11:23.949836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:83960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.163 [2024-07-15 10:11:23.949844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:20 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:30:51.163 [2024-07-15 10:11:23.949859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:83968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.163 [2024-07-15 10:11:23.949868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:30:51.163 [2024-07-15 10:11:23.949884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:83976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.164 [2024-07-15 10:11:23.949900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:51.164 [2024-07-15 10:11:23.949917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:83656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.164 [2024-07-15 10:11:23.949926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:51.164 [2024-07-15 10:11:23.949942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:83664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.164 [2024-07-15 10:11:23.949951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:30:51.164 [2024-07-15 10:11:23.949968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:83672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.164 [2024-07-15 10:11:23.949976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:30:51.164 [2024-07-15 10:11:23.949992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:83680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.164 [2024-07-15 10:11:23.950001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:30:51.164 [2024-07-15 10:11:23.950017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:83688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.164 [2024-07-15 10:11:23.950026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:30:51.164 [2024-07-15 10:11:23.950041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:83696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.164 [2024-07-15 10:11:23.950050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:30:51.164 [2024-07-15 10:11:23.950065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:83704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.164 [2024-07-15 10:11:23.950074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:30:51.164 [2024-07-15 10:11:23.950090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:83712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.164 [2024-07-15 10:11:23.950099] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:30:51.164 [2024-07-15 10:11:23.950114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:83720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.164 [2024-07-15 10:11:23.950123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:30:51.164 [2024-07-15 10:11:23.950138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:83728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.164 [2024-07-15 10:11:23.950147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:30:51.164 [2024-07-15 10:11:23.950162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:83736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.164 [2024-07-15 10:11:23.950171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:30:51.164 [2024-07-15 10:11:23.950186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:83744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.164 [2024-07-15 10:11:23.950201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:30:51.164 [2024-07-15 10:11:23.950217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:83984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.164 [2024-07-15 10:11:23.950226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:30:51.164 [2024-07-15 10:11:23.950241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:83992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.164 [2024-07-15 10:11:23.950250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:30:51.164 [2024-07-15 10:11:23.950266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:84000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.164 [2024-07-15 10:11:23.950274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:30:51.164 [2024-07-15 10:11:23.950290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:84008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.164 [2024-07-15 10:11:23.950298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:30:51.164 [2024-07-15 10:11:23.950314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:84016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.164 [2024-07-15 10:11:23.950322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:30:51.164 [2024-07-15 10:11:23.950337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:84024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:30:51.164 [2024-07-15 10:11:23.950346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:30:51.164 [2024-07-15 10:11:23.950361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:84032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.164 [2024-07-15 10:11:23.950369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:30:51.164 [2024-07-15 10:11:23.950384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:84040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.164 [2024-07-15 10:11:23.950393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:30:51.164 [2024-07-15 10:11:23.950408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:84048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.164 [2024-07-15 10:11:23.950417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:30:51.164 [2024-07-15 10:11:23.950433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:84056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.164 [2024-07-15 10:11:23.950441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:30:51.164 [2024-07-15 10:11:23.950456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:84064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.164 [2024-07-15 10:11:23.950464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:30:51.164 [2024-07-15 10:11:23.950480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:84072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.164 [2024-07-15 10:11:23.950489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:30:51.164 [2024-07-15 10:11:23.950508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:84080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.164 [2024-07-15 10:11:23.950517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:30:51.164 [2024-07-15 10:11:23.950533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:84088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.164 [2024-07-15 10:11:23.950541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:51.164 [2024-07-15 10:11:23.950557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:84096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.164 [2024-07-15 10:11:23.950565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:30:51.164 [2024-07-15 10:11:23.950581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 
lba:84104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.164 [2024-07-15 10:11:23.950589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:30:51.164 [2024-07-15 10:11:23.950604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:84112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.164 [2024-07-15 10:11:23.950613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:51.164 [2024-07-15 10:11:23.950628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:84120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.164 [2024-07-15 10:11:23.950636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:51.164 [2024-07-15 10:11:23.950651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:84128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.164 [2024-07-15 10:11:23.950670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.164 [2024-07-15 10:11:23.950686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:84136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.164 [2024-07-15 10:11:23.950694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:51.164 [2024-07-15 10:11:23.950709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:84144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.164 [2024-07-15 10:11:23.950718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:51.164 [2024-07-15 10:11:23.950733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:84152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.164 [2024-07-15 10:11:23.950741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:30:51.165 [2024-07-15 10:11:23.950757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:84160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.165 [2024-07-15 10:11:23.950766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:30:51.165 [2024-07-15 10:11:23.950782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:84168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.165 [2024-07-15 10:11:23.950791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:30:51.165 [2024-07-15 10:11:23.950811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:84176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.165 [2024-07-15 10:11:23.950820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:30:51.165 [2024-07-15 10:11:23.950835] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:84184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.165 [2024-07-15 10:11:23.950844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:30:51.165 [2024-07-15 10:11:23.950859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:84192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.165 [2024-07-15 10:11:23.950868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:30:51.165 [2024-07-15 10:11:23.950883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:84200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.165 [2024-07-15 10:11:23.950892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:30:51.165 [2024-07-15 10:11:23.950907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:84208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.165 [2024-07-15 10:11:23.950916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:30:51.165 [2024-07-15 10:11:23.950932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:84216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.165 [2024-07-15 10:11:23.950940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:30:51.165 [2024-07-15 10:11:23.950956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:84224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.165 [2024-07-15 10:11:23.950964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:30:51.165 [2024-07-15 10:11:23.950979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:84232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.165 [2024-07-15 10:11:23.950988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:30:51.165 [2024-07-15 10:11:23.951003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:84240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.165 [2024-07-15 10:11:23.951012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:30:51.165 [2024-07-15 10:11:23.951129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:84248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.165 [2024-07-15 10:11:23.951141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:30:51.165 [2024-07-15 10:11:23.951160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:84256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.165 [2024-07-15 10:11:23.951169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 
00:30:51.165 [2024-07-15 10:11:23.951187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:84264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.165 [2024-07-15 10:11:23.951196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:30:51.165 [2024-07-15 10:11:23.951214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:84272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.165 [2024-07-15 10:11:23.951229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:30:51.165 [2024-07-15 10:11:23.951247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:84280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.165 [2024-07-15 10:11:23.951256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:30:51.165 [2024-07-15 10:11:23.951274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:84288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.165 [2024-07-15 10:11:23.951283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:30:51.165 [2024-07-15 10:11:23.951301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:84296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.165 [2024-07-15 10:11:23.951309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:30:51.165 [2024-07-15 10:11:23.951328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:84304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.165 [2024-07-15 10:11:23.951336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:30:51.165 [2024-07-15 10:11:23.951355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:84312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.165 [2024-07-15 10:11:23.951363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:30:51.165 [2024-07-15 10:11:23.951382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:84320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.165 [2024-07-15 10:11:23.951390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:30:51.165 [2024-07-15 10:11:23.951409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:84328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.165 [2024-07-15 10:11:23.951418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:30:51.165 [2024-07-15 10:11:23.951436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:84336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.165 [2024-07-15 10:11:23.951445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:79 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:30:51.165 [2024-07-15 10:11:23.951463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:84344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.165 [2024-07-15 10:11:23.951472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:30:51.165 [2024-07-15 10:11:23.951491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:84352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.165 [2024-07-15 10:11:23.951499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:30:51.165 [2024-07-15 10:11:23.951517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:84360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.165 [2024-07-15 10:11:23.951525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:30:51.165 [2024-07-15 10:11:23.951544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:84368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.165 [2024-07-15 10:11:23.951556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:30:51.165 [2024-07-15 10:11:23.951575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:84376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.165 [2024-07-15 10:11:23.951584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:30:51.165 [2024-07-15 10:11:23.951603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:84384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.165 [2024-07-15 10:11:23.951611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:30:51.165 [2024-07-15 10:11:23.951629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:84392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.165 [2024-07-15 10:11:23.951638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:51.165 [2024-07-15 10:11:23.951656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:84400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.165 [2024-07-15 10:11:23.951674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:51.165 [2024-07-15 10:11:23.951692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:84408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.165 [2024-07-15 10:11:23.951701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:30:51.165 [2024-07-15 10:11:23.951719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:84416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.165 [2024-07-15 10:11:23.951728] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:30:51.165 [2024-07-15 10:11:23.951746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:84424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.165 [2024-07-15 10:11:23.951755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:30:51.165 [2024-07-15 10:11:23.951773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:84432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.165 [2024-07-15 10:11:23.951781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:30:51.165 [2024-07-15 10:11:23.951804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:84440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.165 [2024-07-15 10:11:23.951813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:30:51.165 [2024-07-15 10:11:23.951831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:84448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.165 [2024-07-15 10:11:23.951840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:30:51.165 [2024-07-15 10:11:23.951858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:84456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.165 [2024-07-15 10:11:23.951867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:30:51.165 [2024-07-15 10:11:23.951889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:84464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.165 [2024-07-15 10:11:23.951897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:30:51.166 [2024-07-15 10:11:23.951920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.166 [2024-07-15 10:11:23.951929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:30:51.166 [2024-07-15 10:11:23.951947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:84480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.166 [2024-07-15 10:11:23.951956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:30:51.166 [2024-07-15 10:11:23.951974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:84488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.166 [2024-07-15 10:11:23.951983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:30:51.166 [2024-07-15 10:11:23.952001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:84496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:30:51.166 [2024-07-15 10:11:23.952010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:30:51.166 [2024-07-15 10:11:23.952028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:84504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.166 [2024-07-15 10:11:23.952037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:30:51.166 [2024-07-15 10:11:23.952055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:84512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.166 [2024-07-15 10:11:23.952065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:30:51.166 [2024-07-15 10:11:23.952083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:84520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.166 [2024-07-15 10:11:23.952091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:30:51.166 [2024-07-15 10:11:23.952109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:84528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.166 [2024-07-15 10:11:23.952118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:30:51.166 [2024-07-15 10:11:23.952136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:84536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.166 [2024-07-15 10:11:23.952145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:30:51.166 [2024-07-15 10:11:23.952163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:84544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.166 [2024-07-15 10:11:23.952171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:30:51.166 [2024-07-15 10:11:23.952190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:84552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.166 [2024-07-15 10:11:23.952198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:30:51.166 [2024-07-15 10:11:23.952216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:84560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.166 [2024-07-15 10:11:23.952225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:30:51.166 [2024-07-15 10:11:23.952248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:84568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.166 [2024-07-15 10:11:23.952258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:30:51.166 [2024-07-15 10:11:23.952276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 
lba:84576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.166 [2024-07-15 10:11:23.952285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:30:51.166 [2024-07-15 10:11:23.952303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:84584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.166 [2024-07-15 10:11:23.952312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:30:51.166 [2024-07-15 10:11:23.952331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:84592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.166 [2024-07-15 10:11:23.952340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:30:51.166 [2024-07-15 10:11:23.952358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:84600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.166 [2024-07-15 10:11:23.952367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:30:51.166 [2024-07-15 10:11:23.952392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:84608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.166 [2024-07-15 10:11:23.952401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:30:51.166 [2024-07-15 10:11:23.952419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:84616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.166 [2024-07-15 10:11:23.952428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:30:51.166 [2024-07-15 10:11:23.952446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:84624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.166 [2024-07-15 10:11:23.952454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:30:51.166 [2024-07-15 10:11:23.952473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:84632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.166 [2024-07-15 10:11:23.952481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:30:51.166 [2024-07-15 10:11:23.952500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:84640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.166 [2024-07-15 10:11:23.952508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:30:51.166 [2024-07-15 10:11:23.952526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:84648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.166 [2024-07-15 10:11:23.952534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:51.166 [2024-07-15 10:11:23.952553] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:84656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.166 [2024-07-15 10:11:23.952561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:51.166 [2024-07-15 10:11:23.952579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:84664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.166 [2024-07-15 10:11:23.952592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:30:51.166 [2024-07-15 10:11:30.772041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:78176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.166 [2024-07-15 10:11:30.772101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:51.166 [2024-07-15 10:11:30.772128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.166 [2024-07-15 10:11:30.772139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:51.166 [2024-07-15 10:11:30.772155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:78192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.166 [2024-07-15 10:11:30.772166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:30:51.166 [2024-07-15 10:11:30.772182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:78200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.166 [2024-07-15 10:11:30.772191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:30:51.166 [2024-07-15 10:11:30.772207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:78208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.166 [2024-07-15 10:11:30.772216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:30:51.166 [2024-07-15 10:11:30.772232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:78216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.166 [2024-07-15 10:11:30.772241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:30:51.166 [2024-07-15 10:11:30.772257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:77736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.166 [2024-07-15 10:11:30.772266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:30:51.166 [2024-07-15 10:11:30.772282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:77744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.166 [2024-07-15 10:11:30.772292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 
00:30:51.166 [2024-07-15 10:11:30.772308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:77752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.166 [2024-07-15 10:11:30.772317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:30:51.166 [2024-07-15 10:11:30.772333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:77760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.166 [2024-07-15 10:11:30.772342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:30:51.166 [2024-07-15 10:11:30.772358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:77768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.166 [2024-07-15 10:11:30.772367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:30:51.166 [2024-07-15 10:11:30.772390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:77776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.166 [2024-07-15 10:11:30.772420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:30:51.166 [2024-07-15 10:11:30.772437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:77784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.166 [2024-07-15 10:11:30.772447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:30:51.166 [2024-07-15 10:11:30.772462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:77792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.166 [2024-07-15 10:11:30.772472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:30:51.166 [2024-07-15 10:11:30.772488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:77800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.166 [2024-07-15 10:11:30.772497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:30:51.166 [2024-07-15 10:11:30.772513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:77808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.167 [2024-07-15 10:11:30.772522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:30:51.167 [2024-07-15 10:11:30.772538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:77816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.167 [2024-07-15 10:11:30.772547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:30:51.167 [2024-07-15 10:11:30.772564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:77824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.167 [2024-07-15 10:11:30.772573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:30:51.167 [2024-07-15 10:11:30.772588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:77832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.167 [2024-07-15 10:11:30.772597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:30:51.167 [2024-07-15 10:11:30.772614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:77840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.167 [2024-07-15 10:11:30.772623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:30:51.167 [2024-07-15 10:11:30.772639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:77848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.167 [2024-07-15 10:11:30.772648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:30:51.167 [2024-07-15 10:11:30.772674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:78224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.167 [2024-07-15 10:11:30.772684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:30:51.167 [2024-07-15 10:11:30.772700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:78232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.167 [2024-07-15 10:11:30.772710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:30:51.167 [2024-07-15 10:11:30.773269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:78240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.167 [2024-07-15 10:11:30.773285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:30:51.167 [2024-07-15 10:11:30.773313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:78248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.167 [2024-07-15 10:11:30.773323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:30:51.167 [2024-07-15 10:11:30.773339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:78256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.167 [2024-07-15 10:11:30.773349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:30:51.167 [2024-07-15 10:11:30.773365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:78264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.167 [2024-07-15 10:11:30.773375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:51.167 [2024-07-15 10:11:30.773391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:78272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.167 [2024-07-15 10:11:30.773401] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:30:51.167 [2024-07-15 10:11:30.773417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:78280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.167 [2024-07-15 10:11:30.773426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:30:51.167 [2024-07-15 10:11:30.773442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:78288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.167 [2024-07-15 10:11:30.773452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:51.167 [2024-07-15 10:11:30.773468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:78296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.167 [2024-07-15 10:11:30.773478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:51.167 [2024-07-15 10:11:30.773493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:78304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.167 [2024-07-15 10:11:30.773503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.167 [2024-07-15 10:11:30.773519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:78312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.167 [2024-07-15 10:11:30.773528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:51.167 [2024-07-15 10:11:30.773545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:78320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.167 [2024-07-15 10:11:30.773554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:51.167 [2024-07-15 10:11:30.773570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:78328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.167 [2024-07-15 10:11:30.773580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:30:51.167 [2024-07-15 10:11:30.773596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:78336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.167 [2024-07-15 10:11:30.773606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:30:51.167 [2024-07-15 10:11:30.773626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:78344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.167 [2024-07-15 10:11:30.773636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:30:51.167 [2024-07-15 10:11:30.773652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:78352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.167 
[2024-07-15 10:11:30.773673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:30:51.167 [2024-07-15 10:11:30.773689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:78360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.167 [2024-07-15 10:11:30.773699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:30:51.167 [2024-07-15 10:11:30.773714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:78368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.167 [2024-07-15 10:11:30.773724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:30:51.167 [2024-07-15 10:11:30.773740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:78376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.167 [2024-07-15 10:11:30.773749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:30:51.167 [2024-07-15 10:11:30.773765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:78384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.167 [2024-07-15 10:11:30.773774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:30:51.167 [2024-07-15 10:11:30.773790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:78392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.167 [2024-07-15 10:11:30.773799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:30:51.167 [2024-07-15 10:11:30.773814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:78400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.167 [2024-07-15 10:11:30.773824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:30:51.167 [2024-07-15 10:11:30.773840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:78408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.167 [2024-07-15 10:11:30.773849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:30:51.167 [2024-07-15 10:11:30.773865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:78416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.167 [2024-07-15 10:11:30.773874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:30:51.167 [2024-07-15 10:11:30.773889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:78424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.167 [2024-07-15 10:11:30.773899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:30:51.167 [2024-07-15 10:11:30.773914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:78432 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.167 [2024-07-15 10:11:30.773924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:30:51.167 [2024-07-15 10:11:30.773940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:78440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.167 [2024-07-15 10:11:30.773954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:30:51.167 [2024-07-15 10:11:30.773970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:78448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.167 [2024-07-15 10:11:30.773979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:30:51.167 [2024-07-15 10:11:30.773995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:78456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.167 [2024-07-15 10:11:30.774004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:30:51.167 [2024-07-15 10:11:30.774021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:78464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.167 [2024-07-15 10:11:30.774031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:30:51.167 [2024-07-15 10:11:30.774047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:78472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.167 [2024-07-15 10:11:30.774057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:30:51.167 [2024-07-15 10:11:30.774073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:78480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.167 [2024-07-15 10:11:30.774082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:30:51.167 [2024-07-15 10:11:30.774098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:78488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.167 [2024-07-15 10:11:30.774107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:30:51.167 [2024-07-15 10:11:30.774123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:78496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.167 [2024-07-15 10:11:30.774132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:30:51.167 [2024-07-15 10:11:30.774148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:78504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.168 [2024-07-15 10:11:30.774158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:30:51.168 [2024-07-15 10:11:30.774532] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:78512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.168 [2024-07-15 10:11:30.774551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:30:51.168 [2024-07-15 10:11:30.774568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:78520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.168 [2024-07-15 10:11:30.774578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:30:51.168 [2024-07-15 10:11:30.774595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:78528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.168 [2024-07-15 10:11:30.774604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:30:51.168 [2024-07-15 10:11:30.774621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:78536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.168 [2024-07-15 10:11:30.774638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:30:51.168 [2024-07-15 10:11:30.774654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:78544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.168 [2024-07-15 10:11:30.774677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:30:51.168 [2024-07-15 10:11:30.774693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:78552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.168 [2024-07-15 10:11:30.774703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:30:51.168 [2024-07-15 10:11:30.774719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:78560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.168 [2024-07-15 10:11:30.774728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:30:51.168 [2024-07-15 10:11:30.774744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:78568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.168 [2024-07-15 10:11:30.774753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:51.168 [2024-07-15 10:11:30.774769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:78576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.168 [2024-07-15 10:11:30.774779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:51.168 [2024-07-15 10:11:30.774795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:78584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.168 [2024-07-15 10:11:30.774805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:30:51.168 [2024-07-15 10:11:30.774821] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:78592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.168 [2024-07-15 10:11:30.774830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:30:51.168 [2024-07-15 10:11:30.774845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:78600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.168 [2024-07-15 10:11:30.774856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:30:51.168 [2024-07-15 10:11:30.774872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:78608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.168 [2024-07-15 10:11:30.774881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:30:51.168 [2024-07-15 10:11:30.774897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:78616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.168 [2024-07-15 10:11:30.774906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:30:51.168 [2024-07-15 10:11:30.774922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:78624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.168 [2024-07-15 10:11:30.774932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:30:51.168 [2024-07-15 10:11:30.774947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:78632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.168 [2024-07-15 10:11:30.774957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:30:51.168 [2024-07-15 10:11:30.774977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:78640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.168 [2024-07-15 10:11:30.774988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:30:51.168 [2024-07-15 10:11:30.775003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:78648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.168 [2024-07-15 10:11:30.775014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:30:51.168 [2024-07-15 10:11:30.775030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:78656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.168 [2024-07-15 10:11:30.775039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:30:51.168 [2024-07-15 10:11:30.775055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:78664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.168 [2024-07-15 10:11:30.775065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:002d p:0 m:0 
dnr:0 00:30:51.168 [2024-07-15 10:11:30.775081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:78672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.168 [2024-07-15 10:11:30.775090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:30:51.168 [2024-07-15 10:11:30.775106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:78680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.168 [2024-07-15 10:11:30.775116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:30:51.168 [2024-07-15 10:11:30.775132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:78688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.168 [2024-07-15 10:11:30.775141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:30:51.168 [2024-07-15 10:11:30.775157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:78696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.168 [2024-07-15 10:11:30.775167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:30:51.168 [2024-07-15 10:11:30.775183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:78704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.168 [2024-07-15 10:11:30.775192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:30:51.168 [2024-07-15 10:11:30.775208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:78712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.168 [2024-07-15 10:11:30.775218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:30:51.168 [2024-07-15 10:11:30.775234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:78720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.168 [2024-07-15 10:11:30.775244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:30:51.168 [2024-07-15 10:11:30.775260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:78728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.168 [2024-07-15 10:11:30.775270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:30:51.168 [2024-07-15 10:11:30.775290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:78736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.168 [2024-07-15 10:11:30.775299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:30:51.168 [2024-07-15 10:11:30.775315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:78744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.168 [2024-07-15 10:11:30.775325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:30:51.168 [2024-07-15 10:11:30.775341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:78752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.168 [2024-07-15 10:11:30.775350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:30:51.168 [2024-07-15 10:11:30.775366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:77856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.169 [2024-07-15 10:11:30.775376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:30:51.169 [2024-07-15 10:11:30.775392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:77864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.169 [2024-07-15 10:11:30.775401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:30:51.169 [2024-07-15 10:11:30.775417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:77872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.169 [2024-07-15 10:11:30.775427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:30:51.169 [2024-07-15 10:11:30.775443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:77880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.169 [2024-07-15 10:11:30.775452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:30:51.169 [2024-07-15 10:11:30.775468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:77888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.169 [2024-07-15 10:11:30.775477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:30:51.169 [2024-07-15 10:11:30.775493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:77896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.169 [2024-07-15 10:11:30.775503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:30:51.169 [2024-07-15 10:11:30.775519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:77904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.169 [2024-07-15 10:11:30.775528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:30:51.169 [2024-07-15 10:11:30.775544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:77912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.169 [2024-07-15 10:11:30.775555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:30:51.169 [2024-07-15 10:11:30.775570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:77920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.169 [2024-07-15 10:11:30.775579] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:51.169 [2024-07-15 10:11:30.775596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:77928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.169 [2024-07-15 10:11:30.775613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:51.169 [2024-07-15 10:11:30.775629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:77936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.169 [2024-07-15 10:11:30.775639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:30:51.169 [2024-07-15 10:11:30.775654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:77944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.169 [2024-07-15 10:11:30.775673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:30:51.169 [2024-07-15 10:11:30.775695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:77952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.169 [2024-07-15 10:11:30.775705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:30:51.169 [2024-07-15 10:11:30.775723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:77960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.169 [2024-07-15 10:11:30.775733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:30:51.169 [2024-07-15 10:11:30.775749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:77968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.169 [2024-07-15 10:11:30.775759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:30:51.169 [2024-07-15 10:11:30.775774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:77976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.169 [2024-07-15 10:11:30.775784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:30:51.169 [2024-07-15 10:11:30.775800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:77984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.169 [2024-07-15 10:11:30.775809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:30:51.169 [2024-07-15 10:11:30.775826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:77992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.169 [2024-07-15 10:11:30.775845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:30:51.169 [2024-07-15 10:11:30.775860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:78000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:30:51.169 [2024-07-15 10:11:30.775869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:30:51.169 [2024-07-15 10:11:30.775882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:78008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.169 [2024-07-15 10:11:30.775891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:30:51.169 [2024-07-15 10:11:30.775906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:78016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.169 [2024-07-15 10:11:30.775914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:30:51.169 [2024-07-15 10:11:30.775928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:78024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.169 [2024-07-15 10:11:30.775941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:30:51.169 [2024-07-15 10:11:30.775955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:78032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.169 [2024-07-15 10:11:30.775964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:30:51.169 [2024-07-15 10:11:30.775978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:78040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.169 [2024-07-15 10:11:30.775987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:30:51.169 [2024-07-15 10:11:30.776452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:78048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.169 [2024-07-15 10:11:30.776472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:30:51.169 [2024-07-15 10:11:30.776491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:78056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.169 [2024-07-15 10:11:30.776500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:30:51.169 [2024-07-15 10:11:30.776517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:78064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.169 [2024-07-15 10:11:30.776527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:30:51.169 [2024-07-15 10:11:30.776543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:78072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.169 [2024-07-15 10:11:30.776553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:30:51.169 [2024-07-15 10:11:30.776572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 
nsid:1 lba:78080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.169 [2024-07-15 10:11:30.776581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:30:51.169 [2024-07-15 10:11:30.776598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:78088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.169 [2024-07-15 10:11:30.776608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:30:51.169 [2024-07-15 10:11:30.776624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:78096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.169 [2024-07-15 10:11:30.776634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:30:51.169 [2024-07-15 10:11:30.776649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:78104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.169 [2024-07-15 10:11:30.776659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:30:51.169 [2024-07-15 10:11:30.776685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:78112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.169 [2024-07-15 10:11:30.776696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:30:51.169 [2024-07-15 10:11:30.776712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:78120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.169 [2024-07-15 10:11:30.776722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:30:51.169 [2024-07-15 10:11:30.776745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:78128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.169 [2024-07-15 10:11:30.776755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:30:51.169 [2024-07-15 10:11:30.776771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:78136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.169 [2024-07-15 10:11:30.776781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:30:51.169 [2024-07-15 10:11:30.776797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:78144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.169 [2024-07-15 10:11:30.776806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:30:51.169 [2024-07-15 10:11:30.776822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:78152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.169 [2024-07-15 10:11:30.776832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:30:51.169 [2024-07-15 10:11:30.776848] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:78160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.169 [2024-07-15 10:11:30.776858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:30:51.170 [2024-07-15 10:11:30.776874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:78168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.170 [2024-07-15 10:11:30.776883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:30:51.170 [2024-07-15 10:11:30.776899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:78176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.170 [2024-07-15 10:11:30.776908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:51.170 [2024-07-15 10:11:30.776924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:78184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.170 [2024-07-15 10:11:30.776934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:51.170 [2024-07-15 10:11:30.776949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:78192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.170 [2024-07-15 10:11:30.776959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:30:51.170 [2024-07-15 10:11:30.776976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:78200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.170 [2024-07-15 10:11:30.776985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:30:51.170 [2024-07-15 10:11:30.777002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.170 [2024-07-15 10:11:30.777012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:30:51.170 [2024-07-15 10:11:30.777029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:78216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.170 [2024-07-15 10:11:30.777039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:30:51.170 [2024-07-15 10:11:30.777060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:77736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.170 [2024-07-15 10:11:30.777069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:30:51.170 [2024-07-15 10:11:30.777086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:77744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.170 [2024-07-15 10:11:30.777095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 
00:30:51.170 [2024-07-15 10:11:30.777110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:77752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.170 [2024-07-15 10:11:30.777120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:30:51.170 [2024-07-15 10:11:30.777136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:77760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.170 [2024-07-15 10:11:30.777146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:30:51.170 [2024-07-15 10:11:30.777162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:77768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.170 [2024-07-15 10:11:30.777171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:30:51.170 [2024-07-15 10:11:30.777188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:77776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.170 [2024-07-15 10:11:30.777197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:30:51.170 [2024-07-15 10:11:30.777212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:77784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.170 [2024-07-15 10:11:30.777222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:30:51.170 [2024-07-15 10:11:30.777237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:77792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.170 [2024-07-15 10:11:30.777247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:30:51.170 [2024-07-15 10:11:30.777263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:77800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.170 [2024-07-15 10:11:30.777273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:30:51.170 [2024-07-15 10:11:30.777288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:77808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.170 [2024-07-15 10:11:30.777298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:30:51.170 [2024-07-15 10:11:30.777313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:77816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.170 [2024-07-15 10:11:30.777323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:30:51.170 [2024-07-15 10:11:30.777339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:77824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.170 [2024-07-15 10:11:30.777348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:30:51.170 [2024-07-15 10:11:30.777364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:77832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.170 [2024-07-15 10:11:30.777377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:30:51.170 [2024-07-15 10:11:30.777393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:77840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.170 [2024-07-15 10:11:30.777403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:30:51.170 [2024-07-15 10:11:30.777421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:77848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.170 [2024-07-15 10:11:30.777430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:30:51.170 [2024-07-15 10:11:30.777447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:78224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.170 [2024-07-15 10:11:30.777458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:30:51.170 [2024-07-15 10:11:30.777473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:78232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.170 [2024-07-15 10:11:30.777483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:30:51.170 [2024-07-15 10:11:30.777499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:78240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.170 [2024-07-15 10:11:30.777508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:30:51.170 [2024-07-15 10:11:30.777524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:78248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.170 [2024-07-15 10:11:30.777533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:30:51.170 [2024-07-15 10:11:30.777549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:78256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.170 [2024-07-15 10:11:30.777559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:30:51.170 [2024-07-15 10:11:30.777574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:78264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.170 [2024-07-15 10:11:30.777584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:51.170 [2024-07-15 10:11:30.777600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:78272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.170 [2024-07-15 10:11:30.777609] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:30:51.170 [2024-07-15 10:11:30.777625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:78280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.170 [2024-07-15 10:11:30.777635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:30:51.170 [2024-07-15 10:11:30.796037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:78288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.170 [2024-07-15 10:11:30.796087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:51.170 [2024-07-15 10:11:30.796114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:78296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.170 [2024-07-15 10:11:30.796144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:51.170 [2024-07-15 10:11:30.796167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:78304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.170 [2024-07-15 10:11:30.796181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.170 [2024-07-15 10:11:30.796203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:78312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.170 [2024-07-15 10:11:30.796217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:51.170 [2024-07-15 10:11:30.796239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:78320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.170 [2024-07-15 10:11:30.796252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:51.170 [2024-07-15 10:11:30.796274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:78328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.170 [2024-07-15 10:11:30.796288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:30:51.170 [2024-07-15 10:11:30.796310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:78336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.170 [2024-07-15 10:11:30.796324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:30:51.170 [2024-07-15 10:11:30.796347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:78344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.170 [2024-07-15 10:11:30.796360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:30:51.170 [2024-07-15 10:11:30.796397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:78352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:30:51.170 [2024-07-15 10:11:30.796411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:30:51.170 [2024-07-15 10:11:30.796433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:78360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.170 [2024-07-15 10:11:30.796446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:30:51.170 [2024-07-15 10:11:30.796467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:78368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.171 [2024-07-15 10:11:30.796480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:30:51.171 [2024-07-15 10:11:30.796501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:78376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.171 [2024-07-15 10:11:30.796514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:30:51.171 [2024-07-15 10:11:30.796536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:78384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.171 [2024-07-15 10:11:30.796549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:30:51.171 [2024-07-15 10:11:30.796571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:78392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.171 [2024-07-15 10:11:30.796584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:30:51.171 [2024-07-15 10:11:30.796613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:78400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.171 [2024-07-15 10:11:30.796627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:30:51.171 [2024-07-15 10:11:30.796648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:78408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.171 [2024-07-15 10:11:30.796676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:30:51.171 [2024-07-15 10:11:30.796699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:78416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.171 [2024-07-15 10:11:30.796712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:30:51.171 [2024-07-15 10:11:30.796733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:78424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.171 [2024-07-15 10:11:30.796747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:30:51.171 [2024-07-15 10:11:30.796769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 
lba:78432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.171 [2024-07-15 10:11:30.796781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:30:51.171 [2024-07-15 10:11:30.796803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:78440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.171 [2024-07-15 10:11:30.796816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:30:51.171 [2024-07-15 10:11:30.796838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:78448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.171 [2024-07-15 10:11:30.796851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:30:51.171 [2024-07-15 10:11:30.796872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:78456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.171 [2024-07-15 10:11:30.796885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:30:51.171 [2024-07-15 10:11:30.796907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:78464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.171 [2024-07-15 10:11:30.796921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:30:51.171 [2024-07-15 10:11:30.796942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:78472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.171 [2024-07-15 10:11:30.796955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:30:51.171 [2024-07-15 10:11:30.796977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:78480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.171 [2024-07-15 10:11:30.796990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:30:51.171 [2024-07-15 10:11:30.797012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:78488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.171 [2024-07-15 10:11:30.797025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:30:51.171 [2024-07-15 10:11:30.797054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:78496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.171 [2024-07-15 10:11:30.797067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:30:51.171 [2024-07-15 10:11:30.798084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:78504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.171 [2024-07-15 10:11:30.798111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:30:51.171 [2024-07-15 10:11:30.798139] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:78512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.171 [2024-07-15 10:11:30.798153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:30:51.171 [2024-07-15 10:11:30.798174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:78520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.171 [2024-07-15 10:11:30.798188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:30:51.171 [2024-07-15 10:11:30.798210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:78528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.171 [2024-07-15 10:11:30.798223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:30:51.171 [2024-07-15 10:11:30.798245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:78536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.171 [2024-07-15 10:11:30.798258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:30:51.171 [2024-07-15 10:11:30.798279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:78544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.171 [2024-07-15 10:11:30.798292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:30:51.171 [2024-07-15 10:11:30.798313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:78552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.171 [2024-07-15 10:11:30.798326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:30:51.171 [2024-07-15 10:11:30.798347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:78560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.171 [2024-07-15 10:11:30.798360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:30:51.171 [2024-07-15 10:11:30.798381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:78568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.171 [2024-07-15 10:11:30.798394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:51.171 [2024-07-15 10:11:30.798416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:78576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.171 [2024-07-15 10:11:30.798428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:51.171 [2024-07-15 10:11:30.798450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:78584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.171 [2024-07-15 10:11:30.798464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 
00:30:51.171 [2024-07-15 10:11:30.798486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:78592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.171 [2024-07-15 10:11:30.798510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:30:51.171 [2024-07-15 10:11:30.798532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:78600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.171 [2024-07-15 10:11:30.798545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:30:51.171 [2024-07-15 10:11:30.798567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:78608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.171 [2024-07-15 10:11:30.798580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:30:51.171 [2024-07-15 10:11:30.798602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:78616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.171 [2024-07-15 10:11:30.798615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:30:51.171 [2024-07-15 10:11:30.798636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:78624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.171 [2024-07-15 10:11:30.798649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:30:51.171 [2024-07-15 10:11:30.798687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:78632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.171 [2024-07-15 10:11:30.798701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:30:51.171 [2024-07-15 10:11:30.798723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:78640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.171 [2024-07-15 10:11:30.798735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:30:51.171 [2024-07-15 10:11:30.798757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:78648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.171 [2024-07-15 10:11:30.798770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:30:51.171 [2024-07-15 10:11:30.798791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:78656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.171 [2024-07-15 10:11:30.798805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:30:51.171 [2024-07-15 10:11:30.798826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:78664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.171 [2024-07-15 10:11:30.798839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:118 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:30:51.171 [2024-07-15 10:11:30.798861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:78672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.171 [2024-07-15 10:11:30.798874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:30:51.171 [2024-07-15 10:11:30.798895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:78680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.171 [2024-07-15 10:11:30.798908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:30:51.171 [2024-07-15 10:11:30.798930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:78688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.172 [2024-07-15 10:11:30.798949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:30:51.172 [2024-07-15 10:11:30.798971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:78696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.172 [2024-07-15 10:11:30.798984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:30:51.172 [2024-07-15 10:11:30.799005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:78704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.172 [2024-07-15 10:11:30.799018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:30:51.172 [2024-07-15 10:11:30.799040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:78712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.172 [2024-07-15 10:11:30.799053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:30:51.172 [2024-07-15 10:11:30.799074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:78720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.172 [2024-07-15 10:11:30.799087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:30:51.172 [2024-07-15 10:11:30.799109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:78728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.172 [2024-07-15 10:11:30.799122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:30:51.172 [2024-07-15 10:11:30.799143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:78736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.172 [2024-07-15 10:11:30.799157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:30:51.172 [2024-07-15 10:11:30.799179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:78744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.172 [2024-07-15 10:11:30.799192] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:30:51.172 [2024-07-15 10:11:30.799213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:78752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.172 [2024-07-15 10:11:30.799226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:30:51.172 [2024-07-15 10:11:30.799248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:77856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.172 [2024-07-15 10:11:30.799261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:30:51.172 [2024-07-15 10:11:30.799283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:77864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.172 [2024-07-15 10:11:30.799296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:30:51.172 [2024-07-15 10:11:30.799318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:77872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.172 [2024-07-15 10:11:30.799331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:30:51.172 [2024-07-15 10:11:30.799352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:77880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.172 [2024-07-15 10:11:30.799365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:30:51.172 [2024-07-15 10:11:30.799392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:77888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.172 [2024-07-15 10:11:30.799405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:30:51.172 [2024-07-15 10:11:30.799427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:77896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.172 [2024-07-15 10:11:30.799440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:30:51.172 [2024-07-15 10:11:30.799461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:77904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.172 [2024-07-15 10:11:30.799474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:30:51.172 [2024-07-15 10:11:30.799495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:77912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.172 [2024-07-15 10:11:30.799509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:30:51.172 [2024-07-15 10:11:30.799530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:77920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:30:51.172 [2024-07-15 10:11:30.799543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:51.172 [2024-07-15 10:11:30.799565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:77928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.172 [2024-07-15 10:11:30.799577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:51.172 [2024-07-15 10:11:30.799599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:77936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.172 [2024-07-15 10:11:30.799612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:30:51.172 [2024-07-15 10:11:30.799634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:77944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.172 [2024-07-15 10:11:30.799647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:30:51.172 [2024-07-15 10:11:30.799679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:77952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.172 [2024-07-15 10:11:30.799693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:30:51.172 [2024-07-15 10:11:30.799715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:77960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.172 [2024-07-15 10:11:30.799728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:30:51.172 [2024-07-15 10:11:30.799749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:77968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.172 [2024-07-15 10:11:30.799762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:30:51.172 [2024-07-15 10:11:30.799784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:77976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.172 [2024-07-15 10:11:30.799797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:30:51.172 [2024-07-15 10:11:30.799824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:77984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.172 [2024-07-15 10:11:30.799837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:30:51.172 [2024-07-15 10:11:30.799859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:77992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.172 [2024-07-15 10:11:30.799872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:30:51.172 [2024-07-15 10:11:30.799893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 
nsid:1 lba:78000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.172 [2024-07-15 10:11:30.799906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:30:51.172 [2024-07-15 10:11:30.799927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:78008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.172 [2024-07-15 10:11:30.799940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:30:51.172 [2024-07-15 10:11:30.799962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:78016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.172 [2024-07-15 10:11:30.799975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:30:51.172 [2024-07-15 10:11:30.799996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:78024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.172 [2024-07-15 10:11:30.800009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:30:51.172 [2024-07-15 10:11:30.800031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:78032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.172 [2024-07-15 10:11:30.800044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:30:51.172 [2024-07-15 10:11:30.800767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:78040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.172 [2024-07-15 10:11:30.800808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:30:51.172 [2024-07-15 10:11:30.800833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:78048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.172 [2024-07-15 10:11:30.800847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:30:51.172 [2024-07-15 10:11:30.800868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:78056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.173 [2024-07-15 10:11:30.800882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:30:51.173 [2024-07-15 10:11:30.800905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:78064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.173 [2024-07-15 10:11:30.800918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:30:51.173 [2024-07-15 10:11:30.800940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:78072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.173 [2024-07-15 10:11:30.800954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:30:51.173 [2024-07-15 10:11:30.800985] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:78080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.173 [2024-07-15 10:11:30.800999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:30:51.173 [2024-07-15 10:11:30.801020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:78088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.173 [2024-07-15 10:11:30.801033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:30:51.173 [2024-07-15 10:11:30.801055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:78096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.173 [2024-07-15 10:11:30.801068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:30:51.173 [2024-07-15 10:11:30.801089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:78104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.173 [2024-07-15 10:11:30.801102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:30:51.173 [2024-07-15 10:11:30.801124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:78112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.173 [2024-07-15 10:11:30.801137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:30:51.173 [2024-07-15 10:11:30.801159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:78120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.173 [2024-07-15 10:11:30.801172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:30:51.173 [2024-07-15 10:11:30.801194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:78128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.173 [2024-07-15 10:11:30.801206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:30:51.173 [2024-07-15 10:11:30.801228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:78136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.173 [2024-07-15 10:11:30.801241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:30:51.173 [2024-07-15 10:11:30.801262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:78144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.173 [2024-07-15 10:11:30.801275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:30:51.173 [2024-07-15 10:11:30.801297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:78152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.173 [2024-07-15 10:11:30.801310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:005e p:0 m:0 dnr:0 
00:30:51.173 [2024-07-15 10:11:30.801332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:78160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.173 [2024-07-15 10:11:30.801345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:30:51.173 [2024-07-15 10:11:30.801367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:78168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.173 [2024-07-15 10:11:30.801379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:30:51.173 [2024-07-15 10:11:30.801401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:78176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.173 [2024-07-15 10:11:30.801419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:51.173 [2024-07-15 10:11:30.801441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:78184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.173 [2024-07-15 10:11:30.801455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:51.173 [2024-07-15 10:11:30.801477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:78192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.173 [2024-07-15 10:11:30.801490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:30:51.173 [2024-07-15 10:11:30.801511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:78200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.173 [2024-07-15 10:11:30.801524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:30:51.173 [2024-07-15 10:11:30.801545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:78208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.173 [2024-07-15 10:11:30.801558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:30:51.173 [2024-07-15 10:11:30.801580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:78216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.173 [2024-07-15 10:11:30.801593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:30:51.173 [2024-07-15 10:11:30.801614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:77736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.173 [2024-07-15 10:11:30.801628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:30:51.173 [2024-07-15 10:11:30.801649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:77744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.173 [2024-07-15 10:11:30.801676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:68 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:30:51.173 [2024-07-15 10:11:30.801698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:77752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.173 [2024-07-15 10:11:30.801711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:30:51.173 [2024-07-15 10:11:30.801733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:77760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.173 [2024-07-15 10:11:30.801745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:30:51.173 [2024-07-15 10:11:30.801766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:77768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.173 [2024-07-15 10:11:30.801779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:30:51.173 [2024-07-15 10:11:30.801827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:77776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.173 [2024-07-15 10:11:30.801843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:30:51.173 [2024-07-15 10:11:30.801874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:77784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.173 [2024-07-15 10:11:30.801895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:30:51.173 [2024-07-15 10:11:30.801917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:77792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.173 [2024-07-15 10:11:30.801929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:30:51.173 [2024-07-15 10:11:30.801951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:77800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.173 [2024-07-15 10:11:30.801964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:30:51.173 [2024-07-15 10:11:30.801986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:77808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.173 [2024-07-15 10:11:30.801998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:30:51.173 [2024-07-15 10:11:30.802020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:77816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.173 [2024-07-15 10:11:30.802033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:30:51.173 [2024-07-15 10:11:30.802055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:77824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.173 [2024-07-15 10:11:30.802067] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:30:51.173 [2024-07-15 10:11:30.802089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:77832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.173 [2024-07-15 10:11:30.802102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:30:51.173 [2024-07-15 10:11:30.802123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:77840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.173 [2024-07-15 10:11:30.802136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:30:51.173 [2024-07-15 10:11:30.802158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:77848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.173 [2024-07-15 10:11:30.802170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:30:51.173 [2024-07-15 10:11:30.802192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:78224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.173 [2024-07-15 10:11:30.802205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:30:51.173 [2024-07-15 10:11:30.802226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:78232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.173 [2024-07-15 10:11:30.802240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:30:51.173 [2024-07-15 10:11:30.802261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:78240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.173 [2024-07-15 10:11:30.802277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:30:51.173 [2024-07-15 10:11:30.802299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:78248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.173 [2024-07-15 10:11:30.802312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:30:51.173 [2024-07-15 10:11:30.802342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:78256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.173 [2024-07-15 10:11:30.802355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:30:51.173 [2024-07-15 10:11:30.802376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:78264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.173 [2024-07-15 10:11:30.802389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:51.174 [2024-07-15 10:11:30.802410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:78272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:30:51.174 [2024-07-15 10:11:30.802424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:007c p:0 m:0 dnr:0
00:30:51.174 [2024-07-15 10:11:30.802445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:78280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:51.174 [2024-07-15 10:11:30.802458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:007d p:0 m:0 dnr:0
[... repeated nvme_qpair.c *NOTICE* pairs omitted: for each outstanding I/O on sqid:1 nsid:1 (WRITE and READ, lba 77736-78752, len:8, cid 0-126), nvme_io_qpair_print_command prints the command and spdk_nvme_print_completion prints its completion with ASYMMETRIC ACCESS INACCESSIBLE (03/02), timestamps 10:11:30.802-10:11:30.830 ...]
00:30:51.179 [2024-07-15 10:11:30.830462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:77992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:51.179 [2024-07-15 10:11:30.830489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:004a p:0 m:0 dnr:0
00:30:51.179 [2024-07-15 
10:11:30.830532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:78000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.179 [2024-07-15 10:11:30.830558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:30:51.179 [2024-07-15 10:11:30.830601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:78008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.179 [2024-07-15 10:11:30.830627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:30:51.179 [2024-07-15 10:11:30.830686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:78016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.179 [2024-07-15 10:11:30.830715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:30:51.179 [2024-07-15 10:11:30.832111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:78024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.179 [2024-07-15 10:11:30.832157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:30:51.179 [2024-07-15 10:11:30.832205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:78032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.179 [2024-07-15 10:11:30.832231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:30:51.179 [2024-07-15 10:11:30.832275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:78040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.179 [2024-07-15 10:11:30.832301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:30:51.179 [2024-07-15 10:11:30.832344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:78048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.179 [2024-07-15 10:11:30.832387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:30:51.179 [2024-07-15 10:11:30.832432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:78056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.179 [2024-07-15 10:11:30.832458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:30:51.179 [2024-07-15 10:11:30.832501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:78064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.179 [2024-07-15 10:11:30.832526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:30:51.179 [2024-07-15 10:11:30.832569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:78072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.179 [2024-07-15 10:11:30.832596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:4 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:30:51.179 [2024-07-15 10:11:30.832639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:78080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.179 [2024-07-15 10:11:30.832688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:30:51.179 [2024-07-15 10:11:30.832751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:78088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.179 [2024-07-15 10:11:30.832777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:30:51.179 [2024-07-15 10:11:30.832820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:78096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.179 [2024-07-15 10:11:30.832847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:30:51.179 [2024-07-15 10:11:30.832890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:78104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.179 [2024-07-15 10:11:30.832916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:30:51.179 [2024-07-15 10:11:30.832958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:78112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.179 [2024-07-15 10:11:30.832984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:30:51.179 [2024-07-15 10:11:30.833028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:78120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.179 [2024-07-15 10:11:30.833053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:30:51.179 [2024-07-15 10:11:30.833095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:78128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.179 [2024-07-15 10:11:30.833122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:30:51.179 [2024-07-15 10:11:30.833164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:78136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.179 [2024-07-15 10:11:30.833190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:30:51.179 [2024-07-15 10:11:30.833232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:78144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.179 [2024-07-15 10:11:30.833258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:30:51.179 [2024-07-15 10:11:30.833301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:78152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.179 [2024-07-15 10:11:30.833328] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:30:51.179 [2024-07-15 10:11:30.833371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:78160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.179 [2024-07-15 10:11:30.833396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:30:51.179 [2024-07-15 10:11:30.833439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:78168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.179 [2024-07-15 10:11:30.833465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:30:51.179 [2024-07-15 10:11:30.833508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:78176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.179 [2024-07-15 10:11:30.833534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:51.179 [2024-07-15 10:11:30.833577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:78184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.179 [2024-07-15 10:11:30.833614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:51.179 [2024-07-15 10:11:30.833676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:78192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.179 [2024-07-15 10:11:30.833705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:30:51.179 [2024-07-15 10:11:30.833747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:78200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.179 [2024-07-15 10:11:30.833773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:30:51.179 [2024-07-15 10:11:30.833815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:78208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.179 [2024-07-15 10:11:30.833841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:30:51.179 [2024-07-15 10:11:30.833889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:78216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.179 [2024-07-15 10:11:30.833908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:30:51.179 [2024-07-15 10:11:30.833937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:77736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.179 [2024-07-15 10:11:30.833955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:30:51.179 [2024-07-15 10:11:30.833985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:77744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.180 [2024-07-15 
10:11:30.834002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:30:51.180 [2024-07-15 10:11:30.834031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:77752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.180 [2024-07-15 10:11:30.834049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:30:51.180 [2024-07-15 10:11:30.834078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:77760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.180 [2024-07-15 10:11:30.834096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:30:51.180 [2024-07-15 10:11:30.834125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:77768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.180 [2024-07-15 10:11:30.834143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:30:51.180 [2024-07-15 10:11:30.834173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:77776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.180 [2024-07-15 10:11:30.834191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:30:51.180 [2024-07-15 10:11:30.834220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:77784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.180 [2024-07-15 10:11:30.834238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:30:51.180 [2024-07-15 10:11:30.834267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:77792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.180 [2024-07-15 10:11:30.834296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:30:51.180 [2024-07-15 10:11:30.834327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:77800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.180 [2024-07-15 10:11:30.834345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:30:51.180 [2024-07-15 10:11:30.834375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:77808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.180 [2024-07-15 10:11:30.834394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:30:51.180 [2024-07-15 10:11:30.834423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:77816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.180 [2024-07-15 10:11:30.834441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:30:51.180 [2024-07-15 10:11:30.834470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:77824 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.180 [2024-07-15 10:11:30.834488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:30:51.180 [2024-07-15 10:11:30.834518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:77832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.180 [2024-07-15 10:11:30.834536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:30:51.180 [2024-07-15 10:11:30.834565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:77840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.180 [2024-07-15 10:11:30.834583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:30:51.180 [2024-07-15 10:11:30.834613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:77848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.180 [2024-07-15 10:11:30.834631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:30:51.180 [2024-07-15 10:11:30.834660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:78224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.180 [2024-07-15 10:11:30.834692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:30:51.180 [2024-07-15 10:11:30.834722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:78232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.180 [2024-07-15 10:11:30.834740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:30:51.180 [2024-07-15 10:11:30.834770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:78240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.180 [2024-07-15 10:11:30.834788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:30:51.180 [2024-07-15 10:11:30.834817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:78248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.180 [2024-07-15 10:11:30.834835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:30:51.180 [2024-07-15 10:11:30.834864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:78256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.180 [2024-07-15 10:11:30.834882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:30:51.180 [2024-07-15 10:11:30.834919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:78264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.180 [2024-07-15 10:11:30.834938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:51.180 [2024-07-15 10:11:30.834967] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:78272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.180 [2024-07-15 10:11:30.834985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:30:51.180 [2024-07-15 10:11:30.835015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:78280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.180 [2024-07-15 10:11:30.835033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:30:51.180 [2024-07-15 10:11:30.835062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:78288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.180 [2024-07-15 10:11:30.835080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:51.180 [2024-07-15 10:11:30.835109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:78296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.180 [2024-07-15 10:11:30.835127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:51.180 [2024-07-15 10:11:30.835156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:78304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.180 [2024-07-15 10:11:30.835174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.180 [2024-07-15 10:11:30.835204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.180 [2024-07-15 10:11:30.835221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:51.180 [2024-07-15 10:11:30.835250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:78320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.180 [2024-07-15 10:11:30.835269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:51.180 [2024-07-15 10:11:30.835298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:78328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.180 [2024-07-15 10:11:30.835316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:30:51.180 [2024-07-15 10:11:30.835345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:78336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.180 [2024-07-15 10:11:30.835363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:30:51.180 [2024-07-15 10:11:30.835392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:78344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.180 [2024-07-15 10:11:30.835410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:30:51.180 [2024-07-15 
10:11:30.835439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:78352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.180 [2024-07-15 10:11:30.835458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:30:51.180 [2024-07-15 10:11:30.835494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:78360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.180 [2024-07-15 10:11:30.835512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:30:51.180 [2024-07-15 10:11:30.835542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:78368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.180 [2024-07-15 10:11:30.835560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:30:51.180 [2024-07-15 10:11:30.835590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:78376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.180 [2024-07-15 10:11:30.835607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:30:51.180 [2024-07-15 10:11:30.835636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:78384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.180 [2024-07-15 10:11:30.835655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:30:51.180 [2024-07-15 10:11:30.835699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:78392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.180 [2024-07-15 10:11:30.835717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:30:51.180 [2024-07-15 10:11:30.835746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:78400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.180 [2024-07-15 10:11:30.835764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:30:51.180 [2024-07-15 10:11:30.835793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:78408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.180 [2024-07-15 10:11:30.835811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:30:51.180 [2024-07-15 10:11:30.835840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:78416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.180 [2024-07-15 10:11:30.835858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:30:51.180 [2024-07-15 10:11:30.835888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:78424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.180 [2024-07-15 10:11:30.835906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 
sqhd:000f p:0 m:0 dnr:0 00:30:51.180 [2024-07-15 10:11:30.835936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:78432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.181 [2024-07-15 10:11:30.835954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:30:51.181 [2024-07-15 10:11:30.835983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:78440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.181 [2024-07-15 10:11:30.836000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:30:51.181 [2024-07-15 10:11:30.836029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:78448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.181 [2024-07-15 10:11:30.836047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:30:51.181 [2024-07-15 10:11:30.836076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:78456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.181 [2024-07-15 10:11:30.836101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:30:51.181 [2024-07-15 10:11:30.836131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:78464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.181 [2024-07-15 10:11:30.836148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:30:51.181 [2024-07-15 10:11:30.836178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:78472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.181 [2024-07-15 10:11:30.836196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:30:51.181 [2024-07-15 10:11:30.837360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:78480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.181 [2024-07-15 10:11:30.837392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:30:51.181 [2024-07-15 10:11:30.837427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:78488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.181 [2024-07-15 10:11:30.837446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:30:51.181 [2024-07-15 10:11:30.837476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:78496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.181 [2024-07-15 10:11:30.837494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:30:51.181 [2024-07-15 10:11:30.837523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:78504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.181 [2024-07-15 10:11:30.837541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:30:51.181 [2024-07-15 10:11:30.837571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:78512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.181 [2024-07-15 10:11:30.837589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:30:51.181 [2024-07-15 10:11:30.837618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:78520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.181 [2024-07-15 10:11:30.837638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:30:51.181 [2024-07-15 10:11:30.837684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:78528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.181 [2024-07-15 10:11:30.837704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:30:51.181 [2024-07-15 10:11:30.837734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:78536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.181 [2024-07-15 10:11:30.837752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:30:51.181 [2024-07-15 10:11:30.837781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:78544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.181 [2024-07-15 10:11:30.837799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:30:51.181 [2024-07-15 10:11:30.837828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:78552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.181 [2024-07-15 10:11:30.837858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:30:51.181 [2024-07-15 10:11:30.837889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:78560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.181 [2024-07-15 10:11:30.837906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:30:51.181 [2024-07-15 10:11:30.837936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:78568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.181 [2024-07-15 10:11:30.837954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:51.181 [2024-07-15 10:11:30.837984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:78576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.181 [2024-07-15 10:11:30.838002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:51.181 [2024-07-15 10:11:30.838031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:78584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.181 [2024-07-15 10:11:30.838048] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:30:51.181 [2024-07-15 10:11:30.838078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:78592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.181 [2024-07-15 10:11:30.838096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:30:51.181 [2024-07-15 10:11:30.838125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:78600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.181 [2024-07-15 10:11:30.838143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:30:51.181 [2024-07-15 10:11:30.838172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:78608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.181 [2024-07-15 10:11:30.838190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:30:51.181 [2024-07-15 10:11:30.838220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:78616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.181 [2024-07-15 10:11:30.838238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:30:51.181 [2024-07-15 10:11:30.838268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:78624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.181 [2024-07-15 10:11:30.838285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:30:51.181 [2024-07-15 10:11:30.838314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:78632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.181 [2024-07-15 10:11:30.838332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:30:51.181 [2024-07-15 10:11:30.838361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:78640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.181 [2024-07-15 10:11:30.838379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:30:51.181 [2024-07-15 10:11:30.838408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:78648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.181 [2024-07-15 10:11:30.838426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:30:51.181 [2024-07-15 10:11:30.838462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:78656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.181 [2024-07-15 10:11:30.838480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:30:51.181 [2024-07-15 10:11:30.838510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:78664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:30:51.181 [2024-07-15 10:11:30.838528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:30:51.181 [2024-07-15 10:11:30.838558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:78672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.181 [2024-07-15 10:11:30.838576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:30:51.181 [2024-07-15 10:11:30.838606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:78680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.181 [2024-07-15 10:11:30.838624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:30:51.182 [2024-07-15 10:11:30.838655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:78688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.182 [2024-07-15 10:11:30.838687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:30:51.182 [2024-07-15 10:11:30.838716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:78696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.182 [2024-07-15 10:11:30.838734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:30:51.182 [2024-07-15 10:11:30.838764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:78704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.182 [2024-07-15 10:11:30.838782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:30:51.182 [2024-07-15 10:11:30.838812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:78712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.182 [2024-07-15 10:11:30.838830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:30:51.182 [2024-07-15 10:11:30.838859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:78720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.182 [2024-07-15 10:11:30.838877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:30:51.182 [2024-07-15 10:11:30.838910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:78728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.182 [2024-07-15 10:11:30.838929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:30:51.182 [2024-07-15 10:11:30.838958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:78736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.182 [2024-07-15 10:11:30.838976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:30:51.182 [2024-07-15 10:11:30.839005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 
lba:78744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.182 [2024-07-15 10:11:30.839023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:30:51.182 [2024-07-15 10:11:30.839062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:78752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.182 [2024-07-15 10:11:30.839081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:30:51.182 [2024-07-15 10:11:30.839111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:77856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.182 [2024-07-15 10:11:30.839129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:30:51.182 [2024-07-15 10:11:30.839159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:77864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.182 [2024-07-15 10:11:30.839177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:30:51.182 [2024-07-15 10:11:30.839206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:77872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.182 [2024-07-15 10:11:30.839224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:30:51.182 [2024-07-15 10:11:30.839254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:77880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.182 [2024-07-15 10:11:30.839272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:30:51.182 [2024-07-15 10:11:30.839302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:77888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.182 [2024-07-15 10:11:30.839320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:30:51.182 [2024-07-15 10:11:30.839350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:77896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.182 [2024-07-15 10:11:30.839368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:30:51.182 [2024-07-15 10:11:30.839397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:77904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.182 [2024-07-15 10:11:30.839416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:30:51.182 [2024-07-15 10:11:30.839445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:77912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.182 [2024-07-15 10:11:30.839463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:30:51.182 [2024-07-15 10:11:30.839493] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:77920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.182 [2024-07-15 10:11:30.839511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:51.182 [2024-07-15 10:11:30.839541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:77928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.182 [2024-07-15 10:11:30.839559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:51.182 [2024-07-15 10:11:30.839588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:77936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.182 [2024-07-15 10:11:30.839606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:30:51.182 [2024-07-15 10:11:30.839636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:77944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.182 [2024-07-15 10:11:30.839673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:30:51.182 [2024-07-15 10:11:30.839707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:77952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.182 [2024-07-15 10:11:30.839726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:30:51.182 [2024-07-15 10:11:30.839755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:77960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.182 [2024-07-15 10:11:30.839773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:30:51.182 [2024-07-15 10:11:30.839803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:77968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.182 [2024-07-15 10:11:30.839821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:30:51.182 [2024-07-15 10:11:30.839854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:77976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.182 [2024-07-15 10:11:30.839872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:30:51.182 [2024-07-15 10:11:30.839902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:77984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.182 [2024-07-15 10:11:30.839921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:30:51.182 [2024-07-15 10:11:30.839950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:77992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.182 [2024-07-15 10:11:30.839968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:004a p:0 m:0 dnr:0 
00:30:51.182 [2024-07-15 10:11:30.839998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:78000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.182 [2024-07-15 10:11:30.840016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:30:51.182 [2024-07-15 10:11:30.840046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:78008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.182 [2024-07-15 10:11:30.840064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:30:51.182 [2024-07-15 10:11:30.841003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:78016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.182 [2024-07-15 10:11:30.841035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:30:51.182 [2024-07-15 10:11:30.841069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:78024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.182 [2024-07-15 10:11:30.841087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:30:51.182 [2024-07-15 10:11:30.841118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:78032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.182 [2024-07-15 10:11:30.841136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:30:51.182 [2024-07-15 10:11:30.841166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:78040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.182 [2024-07-15 10:11:30.841196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:30:51.182 [2024-07-15 10:11:30.841226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:78048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.182 [2024-07-15 10:11:30.841244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:30:51.182 [2024-07-15 10:11:30.841273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:78056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.182 [2024-07-15 10:11:30.841291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:30:51.182 [2024-07-15 10:11:30.841320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:78064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.182 [2024-07-15 10:11:30.841338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:30:51.182 [2024-07-15 10:11:30.841367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:78072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.182 [2024-07-15 10:11:30.841385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:30:51.182 [2024-07-15 10:11:30.841418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:78080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.182 [2024-07-15 10:11:30.841436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:30:51.182 [2024-07-15 10:11:30.841465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:78088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.182 [2024-07-15 10:11:30.841483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:30:51.182 [2024-07-15 10:11:30.841512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:78096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.182 [2024-07-15 10:11:30.841530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:30:51.182 [2024-07-15 10:11:30.841561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:78104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.182 [2024-07-15 10:11:30.841579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:30:51.182 [2024-07-15 10:11:30.841608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:78112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.182 [2024-07-15 10:11:30.841626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:30:51.183 [2024-07-15 10:11:30.841655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:78120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.183 [2024-07-15 10:11:30.841690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:30:51.183 [2024-07-15 10:11:30.841719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:78128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.183 [2024-07-15 10:11:30.841737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:30:51.183 [2024-07-15 10:11:30.841767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:78136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.183 [2024-07-15 10:11:30.841784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:30:51.183 [2024-07-15 10:11:30.841821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:78144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.183 [2024-07-15 10:11:30.841839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:30:51.183 [2024-07-15 10:11:30.841869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:78152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.183 [2024-07-15 10:11:30.841887] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:30:51.183 [2024-07-15 10:11:30.841916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:78160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.183 [2024-07-15 10:11:30.841934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:30:51.183 [2024-07-15 10:11:30.841964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:78168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.183 [2024-07-15 10:11:30.841982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:30:51.183 [2024-07-15 10:11:30.842011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.183 [2024-07-15 10:11:30.842029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:51.183 [2024-07-15 10:11:30.842059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:78184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.183 [2024-07-15 10:11:30.842076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:51.183 [2024-07-15 10:11:30.842105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:78192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.183 [2024-07-15 10:11:30.842123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:30:51.183 [2024-07-15 10:11:30.842154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:78200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.183 [2024-07-15 10:11:30.842171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:30:51.183 [2024-07-15 10:11:30.842203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:78208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.183 [2024-07-15 10:11:30.842221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:30:51.183 [2024-07-15 10:11:30.842250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:78216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.183 [2024-07-15 10:11:30.842268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:30:51.183 [2024-07-15 10:11:30.842298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:77736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.183 [2024-07-15 10:11:30.842315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:30:51.183 [2024-07-15 10:11:30.842345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:77744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:30:51.183 [2024-07-15 10:11:30.842362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:30:51.183 [2024-07-15 10:11:30.842403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:77752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.183 [2024-07-15 10:11:30.842421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:30:51.183 [2024-07-15 10:11:30.842451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:77760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.183 [2024-07-15 10:11:30.842469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:30:51.183 [2024-07-15 10:11:30.842498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:77768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.183 [2024-07-15 10:11:30.842515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:30:51.183 [2024-07-15 10:11:30.842545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:77776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.183 [2024-07-15 10:11:30.842563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:30:51.183 [2024-07-15 10:11:30.842592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:77784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.183 [2024-07-15 10:11:30.842610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:30:51.183 [2024-07-15 10:11:30.854143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:77792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.183 [2024-07-15 10:11:30.854196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:30:51.183 [2024-07-15 10:11:30.854229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:77800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.183 [2024-07-15 10:11:30.854247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:30:51.183 [2024-07-15 10:11:30.854275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:77808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.183 [2024-07-15 10:11:30.854292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:30:51.183 [2024-07-15 10:11:30.854320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:77816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.183 [2024-07-15 10:11:30.854337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:30:51.183 [2024-07-15 10:11:30.854365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 
nsid:1 lba:77824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.183 [2024-07-15 10:11:30.854382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:30:51.183 [2024-07-15 10:11:30.854410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:77832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.183 [2024-07-15 10:11:30.854426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:30:51.183 [2024-07-15 10:11:30.854454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:77840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.183 [2024-07-15 10:11:30.854471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:30:51.183 [2024-07-15 10:11:30.854500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:77848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.183 [2024-07-15 10:11:30.854536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:30:51.183 [2024-07-15 10:11:30.854565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:78224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.183 [2024-07-15 10:11:30.854582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:30:51.183 [2024-07-15 10:11:30.854610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:78232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.183 [2024-07-15 10:11:30.854627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:30:51.183 [2024-07-15 10:11:30.854677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:78240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.183 [2024-07-15 10:11:30.854696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:30:51.183 [2024-07-15 10:11:30.854724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:78248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.183 [2024-07-15 10:11:30.854741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:30:51.183 [2024-07-15 10:11:30.854769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:78256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.183 [2024-07-15 10:11:30.854785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:30:51.183 [2024-07-15 10:11:30.854813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:78264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.183 [2024-07-15 10:11:30.854830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:51.183 [2024-07-15 10:11:30.854858] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:78272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.183 [2024-07-15 10:11:30.854875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:30:51.183 [2024-07-15 10:11:30.854903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:78280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.183 [2024-07-15 10:11:30.854920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:30:51.183 [2024-07-15 10:11:30.854948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:78288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.183 [2024-07-15 10:11:30.854965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:51.183 [2024-07-15 10:11:30.854993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:78296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.183 [2024-07-15 10:11:30.855010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:51.183 [2024-07-15 10:11:30.855038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:78304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.183 [2024-07-15 10:11:30.855054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.183 [2024-07-15 10:11:30.855083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:78312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.183 [2024-07-15 10:11:30.855110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:51.183 [2024-07-15 10:11:30.855137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:78320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.184 [2024-07-15 10:11:30.855154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:51.184 [2024-07-15 10:11:30.855182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:78328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.184 [2024-07-15 10:11:30.855199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:30:51.184 [2024-07-15 10:11:30.855226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:78336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.184 [2024-07-15 10:11:30.855243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:30:51.184 [2024-07-15 10:11:30.855272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:78344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.184 [2024-07-15 10:11:30.855289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 
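(Editor's note, not part of the captured output: the repeated notices around this point are SPDK's per-command error dump — nvme_io_qpair_print_command prints each outstanding I/O (opcode, sqid, cid, nsid, lba, length, SGL type) and spdk_nvme_print_completion prints how it completed. The two hex digits in parentheses read as the NVMe status code type and status code, so "ASYMMETRIC ACCESS INACCESSIBLE (03/02)" is Path Related Status 03h / ANA Inaccessible 02h, and the "ABORTED - SQ DELETION (00/08)" entries that appear later in this run are Generic Command Status 00h / Command Aborted due to SQ Deletion 08h — i.e. I/O first failed because the path's ANA state changed, then the queue pair was torn down. The following standalone C sketch is illustrative only, not SPDK code; the helper name decode_status is made up, and the table covers only the two (SCT/SC) pairs that occur in this log.

    #include <stdio.h>

    /* Decode the "(SCT/SC)" pair printed in the completion notices above,
     * covering only the two statuses that appear in this log. */
    static const char *decode_status(unsigned sct, unsigned sc)
    {
        if (sct == 0x3 && sc == 0x02) {
            return "ASYMMETRIC ACCESS INACCESSIBLE"; /* Path Related Status: ANA Inaccessible */
        }
        if (sct == 0x0 && sc == 0x08) {
            return "ABORTED - SQ DELETION"; /* Generic Command Status: SQ deleted */
        }
        return "other (see the status code tables in the NVMe base specification)";
    }

    int main(void)
    {
        /* The two (SCT/SC) pairs seen in the surrounding log lines. */
        unsigned pairs[][2] = { { 0x3, 0x02 }, { 0x0, 0x08 } };

        for (int i = 0; i < 2; i++) {
            printf("(%02x/%02x) -> %s\n", pairs[i][0], pairs[i][1],
                   decode_status(pairs[i][0], pairs[i][1]));
        }
        return 0;
    }

End of editor's note; the captured output continues below.)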
00:30:51.184 [2024-07-15 10:11:30.855317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:78352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.184 [2024-07-15 10:11:30.855333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:30:51.184 [2024-07-15 10:11:30.855360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:78360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.184 [2024-07-15 10:11:30.855377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:30:51.184 [2024-07-15 10:11:30.855404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:78368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.184 [2024-07-15 10:11:30.855421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:30:51.184 [2024-07-15 10:11:30.855449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:78376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.184 [2024-07-15 10:11:30.855466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:30:51.184 [2024-07-15 10:11:30.855494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:78384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.184 [2024-07-15 10:11:30.855511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:30:51.184 [2024-07-15 10:11:30.855538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:78392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.184 [2024-07-15 10:11:30.855555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:30:51.184 [2024-07-15 10:11:30.855582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:78400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.184 [2024-07-15 10:11:30.855599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:30:51.184 [2024-07-15 10:11:30.855626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:78408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.184 [2024-07-15 10:11:30.855643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:30:51.184 [2024-07-15 10:11:30.855694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:78416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.184 [2024-07-15 10:11:30.855711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:30:51.184 [2024-07-15 10:11:30.855739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:78424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.184 [2024-07-15 10:11:30.855755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:79 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:30:51.184 [2024-07-15 10:11:30.855783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:78432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.184 [2024-07-15 10:11:30.855799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:30:51.184 [2024-07-15 10:11:30.855828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:78440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.184 [2024-07-15 10:11:30.855844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:30:51.184 [2024-07-15 10:11:30.855871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:78448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.184 [2024-07-15 10:11:30.855888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:30:51.184 [2024-07-15 10:11:30.855916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:78456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.184 [2024-07-15 10:11:30.855933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:30:51.184 [2024-07-15 10:11:30.855961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:78464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.184 [2024-07-15 10:11:30.855978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:30:51.184 [2024-07-15 10:11:30.856445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:78472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.184 [2024-07-15 10:11:30.856475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:30:51.184 [2024-07-15 10:11:30.856535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:78480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.184 [2024-07-15 10:11:30.856554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:30:51.184 [2024-07-15 10:11:30.856592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:78488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.184 [2024-07-15 10:11:30.856609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:30:51.184 [2024-07-15 10:11:30.856645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:78496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.184 [2024-07-15 10:11:30.856680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:30:51.184 [2024-07-15 10:11:30.856718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:78504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.184 [2024-07-15 10:11:30.856734] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:30:51.184 [2024-07-15 10:11:30.856784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:78512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.184 [2024-07-15 10:11:30.856801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:30:51.184 [2024-07-15 10:11:30.856838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:78520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.184 [2024-07-15 10:11:30.856855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:30:51.184 [2024-07-15 10:11:30.856892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:78528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.184 [2024-07-15 10:11:30.856908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:30:51.184 [2024-07-15 10:11:30.856944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:78536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.184 [2024-07-15 10:11:30.856961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:30:51.184 [2024-07-15 10:11:30.856998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:78544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.184 [2024-07-15 10:11:30.857014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:30:51.184 [2024-07-15 10:11:30.857050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:78552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.184 [2024-07-15 10:11:30.857066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:30:51.184 [2024-07-15 10:11:30.857103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:78560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.184 [2024-07-15 10:11:30.857120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:30:51.184 [2024-07-15 10:11:30.857156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:78568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.184 [2024-07-15 10:11:30.857172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:51.184 [2024-07-15 10:11:30.857208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:78576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.184 [2024-07-15 10:11:30.857225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:51.184 [2024-07-15 10:11:30.857260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:78584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.184 [2024-07-15 
10:11:30.857277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:30:51.184 [2024-07-15 10:11:30.857313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:78592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.184 [2024-07-15 10:11:30.857330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:30:51.184 [2024-07-15 10:11:30.857366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:78600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.184 [2024-07-15 10:11:30.857383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:30:51.184 [2024-07-15 10:11:30.857419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:78608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.184 [2024-07-15 10:11:30.857445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:30:51.184 [2024-07-15 10:11:30.857481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:78616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.184 [2024-07-15 10:11:30.857498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:30:51.184 [2024-07-15 10:11:30.857534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:78624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.184 [2024-07-15 10:11:30.857550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:30:51.184 [2024-07-15 10:11:30.857586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:78632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.184 [2024-07-15 10:11:30.857603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:30:51.184 [2024-07-15 10:11:30.857639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:78640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.184 [2024-07-15 10:11:30.857670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:30:51.184 [2024-07-15 10:11:30.857708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:78648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.185 [2024-07-15 10:11:30.857725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:30:51.185 [2024-07-15 10:11:30.857761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:78656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.185 [2024-07-15 10:11:30.857778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:30:51.185 [2024-07-15 10:11:30.857815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:78664 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:30:51.185 [2024-07-15 10:11:30.857832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:30:51.185 [2024-07-15 10:11:30.857868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:78672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.185 [2024-07-15 10:11:30.857885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:30:51.185 [2024-07-15 10:11:30.857921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:78680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.185 [2024-07-15 10:11:30.857938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:30:51.185 [2024-07-15 10:11:30.857974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:78688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.185 [2024-07-15 10:11:30.857991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:30:51.185 [2024-07-15 10:11:30.858028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:78696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.185 [2024-07-15 10:11:30.858045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:30:51.185 [2024-07-15 10:11:30.858081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:78704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.185 [2024-07-15 10:11:30.858105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:30:51.185 [2024-07-15 10:11:30.858142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:78712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.185 [2024-07-15 10:11:30.858159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:30:51.185 [2024-07-15 10:11:30.858195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:78720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.185 [2024-07-15 10:11:30.858212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:30:51.185 [2024-07-15 10:11:30.858250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:78728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.185 [2024-07-15 10:11:30.858267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:30:51.185 [2024-07-15 10:11:30.858304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:78736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.185 [2024-07-15 10:11:30.858321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:30:51.185 [2024-07-15 10:11:30.858357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:117 nsid:1 lba:78744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.185 [2024-07-15 10:11:30.858375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:30:51.185 [2024-07-15 10:11:30.858411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:78752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.185 [2024-07-15 10:11:30.858428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:30:51.185 [2024-07-15 10:11:30.858465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:77856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.185 [2024-07-15 10:11:30.858482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:30:51.185 [2024-07-15 10:11:30.858519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:77864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.185 [2024-07-15 10:11:30.858536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:30:51.185 [2024-07-15 10:11:30.858573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:77872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.185 [2024-07-15 10:11:30.858590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:30:51.185 [2024-07-15 10:11:30.858626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:77880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.185 [2024-07-15 10:11:30.858643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:30:51.185 [2024-07-15 10:11:30.858694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:77888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.185 [2024-07-15 10:11:30.858712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:30:51.185 [2024-07-15 10:11:30.858749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:77896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.185 [2024-07-15 10:11:30.858765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:30:51.185 [2024-07-15 10:11:30.858810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:77904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.185 [2024-07-15 10:11:30.858827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:30:51.185 [2024-07-15 10:11:30.858863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:77912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.185 [2024-07-15 10:11:30.858880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:30:51.185 [2024-07-15 10:11:30.858916] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:77920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.185 [2024-07-15 10:11:30.858933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:51.185 [2024-07-15 10:11:30.858970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:77928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.185 [2024-07-15 10:11:30.858987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:51.185 [2024-07-15 10:11:30.859023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:77936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.185 [2024-07-15 10:11:30.859040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:30:51.185 [2024-07-15 10:11:30.859077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:77944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.185 [2024-07-15 10:11:30.859094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:30:51.185 [2024-07-15 10:11:30.859131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:77952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.185 [2024-07-15 10:11:30.859147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:30:51.185 [2024-07-15 10:11:30.859184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:77960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.185 [2024-07-15 10:11:30.859201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:30:51.185 [2024-07-15 10:11:30.859238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:77968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.185 [2024-07-15 10:11:30.859254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:30:51.185 [2024-07-15 10:11:30.859291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:77976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.185 [2024-07-15 10:11:30.859307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:30:51.185 [2024-07-15 10:11:30.859344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:77984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.185 [2024-07-15 10:11:30.859361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:30:51.185 [2024-07-15 10:11:30.859398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:77992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.185 [2024-07-15 10:11:30.859416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:004a 
p:0 m:0 dnr:0 00:30:51.185 [2024-07-15 10:11:30.859460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:78000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.185 [2024-07-15 10:11:30.859477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:30:51.185 [2024-07-15 10:11:30.859714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:78008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.186 [2024-07-15 10:11:30.859737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:30:51.186 [2024-07-15 10:11:43.899452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:67768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.186 [2024-07-15 10:11:43.899502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.186 [2024-07-15 10:11:43.899523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:68040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.186 [2024-07-15 10:11:43.899534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.186 [2024-07-15 10:11:43.899547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:68048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.186 [2024-07-15 10:11:43.899557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.186 [2024-07-15 10:11:43.899569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:68056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.186 [2024-07-15 10:11:43.899578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.186 [2024-07-15 10:11:43.899589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:68064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.186 [2024-07-15 10:11:43.899599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.186 [2024-07-15 10:11:43.899611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:68072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.186 [2024-07-15 10:11:43.899621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.186 [2024-07-15 10:11:43.899632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:68080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.186 [2024-07-15 10:11:43.899641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.186 [2024-07-15 10:11:43.899653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:68088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.186 [2024-07-15 10:11:43.899663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.186 
[2024-07-15 10:11:43.899685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:68096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.186 [2024-07-15 10:11:43.899696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.186 [2024-07-15 10:11:43.899707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:68104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.186 [2024-07-15 10:11:43.899717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.186 [2024-07-15 10:11:43.899728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:68112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.186 [2024-07-15 10:11:43.899761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.186 [2024-07-15 10:11:43.899773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:68120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.186 [2024-07-15 10:11:43.899782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.186 [2024-07-15 10:11:43.899794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:68128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.186 [2024-07-15 10:11:43.899803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.186 [2024-07-15 10:11:43.899815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:68136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.186 [2024-07-15 10:11:43.899824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.186 [2024-07-15 10:11:43.899847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:68144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.186 [2024-07-15 10:11:43.899856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.186 [2024-07-15 10:11:43.899866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:68152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.186 [2024-07-15 10:11:43.899875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.186 [2024-07-15 10:11:43.899886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:68160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.186 [2024-07-15 10:11:43.899896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.186 [2024-07-15 10:11:43.899906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:68168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.186 [2024-07-15 10:11:43.899915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.186 [2024-07-15 10:11:43.899926] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:68176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.186 [2024-07-15 10:11:43.899934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.186 [2024-07-15 10:11:43.899945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:68184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.186 [2024-07-15 10:11:43.899954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.186 [2024-07-15 10:11:43.899964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:68192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.186 [2024-07-15 10:11:43.899973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.186 [2024-07-15 10:11:43.899984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:68200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.186 [2024-07-15 10:11:43.899992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.186 [2024-07-15 10:11:43.900003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:68208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.186 [2024-07-15 10:11:43.900012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.186 [2024-07-15 10:11:43.900028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:68216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.186 [2024-07-15 10:11:43.900039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.186 [2024-07-15 10:11:43.900050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:68224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.186 [2024-07-15 10:11:43.900059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.186 [2024-07-15 10:11:43.900069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:68232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.186 [2024-07-15 10:11:43.900078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.186 [2024-07-15 10:11:43.900088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:68240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.186 [2024-07-15 10:11:43.900098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.186 [2024-07-15 10:11:43.900108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:68248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.186 [2024-07-15 10:11:43.900118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.186 [2024-07-15 10:11:43.900128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:126 nsid:1 lba:68256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.186 [2024-07-15 10:11:43.900138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.186 [2024-07-15 10:11:43.900149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:68264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.186 [2024-07-15 10:11:43.900158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.186 [2024-07-15 10:11:43.900169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:68272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.186 [2024-07-15 10:11:43.900178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.186 [2024-07-15 10:11:43.900189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:68280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.186 [2024-07-15 10:11:43.900198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.186 [2024-07-15 10:11:43.900208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:68288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.186 [2024-07-15 10:11:43.900218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.186 [2024-07-15 10:11:43.900229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:67776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.186 [2024-07-15 10:11:43.900238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.186 [2024-07-15 10:11:43.900249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:67784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.186 [2024-07-15 10:11:43.900268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.186 [2024-07-15 10:11:43.900277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:67792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.186 [2024-07-15 10:11:43.900286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.186 [2024-07-15 10:11:43.900299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:67800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.186 [2024-07-15 10:11:43.900307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.186 [2024-07-15 10:11:43.900316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:67808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.186 [2024-07-15 10:11:43.900324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.186 [2024-07-15 10:11:43.900334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:67816 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.186 [2024-07-15 10:11:43.900342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.186 [2024-07-15 10:11:43.900351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:67824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.187 [2024-07-15 10:11:43.900359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.187 [2024-07-15 10:11:43.900369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:67832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.187 [2024-07-15 10:11:43.900385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.187 [2024-07-15 10:11:43.900395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:67840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.187 [2024-07-15 10:11:43.900403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.187 [2024-07-15 10:11:43.900430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:67848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.187 [2024-07-15 10:11:43.900439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.187 [2024-07-15 10:11:43.900449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:68296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.187 [2024-07-15 10:11:43.900458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.187 [2024-07-15 10:11:43.900469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:68304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.187 [2024-07-15 10:11:43.900479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.187 [2024-07-15 10:11:43.900490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:68312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.187 [2024-07-15 10:11:43.900499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.187 [2024-07-15 10:11:43.900510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:68320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.187 [2024-07-15 10:11:43.900519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.187 [2024-07-15 10:11:43.900529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:68328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.187 [2024-07-15 10:11:43.900539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.187 [2024-07-15 10:11:43.900550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:68336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.187 
[2024-07-15 10:11:43.900564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.187 [2024-07-15 10:11:43.900575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:68344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.187 [2024-07-15 10:11:43.900584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.187 [2024-07-15 10:11:43.900595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:68352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.187 [2024-07-15 10:11:43.900604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.187 [2024-07-15 10:11:43.900615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:68360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.187 [2024-07-15 10:11:43.900624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.187 [2024-07-15 10:11:43.900635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:68368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.187 [2024-07-15 10:11:43.900644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.187 [2024-07-15 10:11:43.900655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:68376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.187 [2024-07-15 10:11:43.900664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.187 [2024-07-15 10:11:43.900682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:68384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.187 [2024-07-15 10:11:43.900692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.187 [2024-07-15 10:11:43.900703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:68392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.187 [2024-07-15 10:11:43.900712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.187 [2024-07-15 10:11:43.900722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:68400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.187 [2024-07-15 10:11:43.900731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.187 [2024-07-15 10:11:43.900742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:68408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.187 [2024-07-15 10:11:43.900751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.187 [2024-07-15 10:11:43.900762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:68416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.187 [2024-07-15 10:11:43.900771] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.187 [2024-07-15 10:11:43.900782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:68424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.187 [2024-07-15 10:11:43.900791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.187 [2024-07-15 10:11:43.900802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:68432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.187 [2024-07-15 10:11:43.900811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.187 [2024-07-15 10:11:43.900826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:68440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.187 [2024-07-15 10:11:43.900836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.187 [2024-07-15 10:11:43.900847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:68448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.187 [2024-07-15 10:11:43.900856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.187 [2024-07-15 10:11:43.900867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:68456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.187 [2024-07-15 10:11:43.900876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.187 [2024-07-15 10:11:43.900887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:68464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.187 [2024-07-15 10:11:43.900897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.187 [2024-07-15 10:11:43.900908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:68472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.187 [2024-07-15 10:11:43.900917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.187 [2024-07-15 10:11:43.900928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:68480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.187 [2024-07-15 10:11:43.900937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.187 [2024-07-15 10:11:43.900947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:68488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.187 [2024-07-15 10:11:43.900956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.187 [2024-07-15 10:11:43.900968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:68496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.187 [2024-07-15 10:11:43.900977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.187 [2024-07-15 10:11:43.900988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:68504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.187 [2024-07-15 10:11:43.900997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.187 [2024-07-15 10:11:43.901007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:68512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.187 [2024-07-15 10:11:43.901017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.187 [2024-07-15 10:11:43.901027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:68520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.187 [2024-07-15 10:11:43.901036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.187 [2024-07-15 10:11:43.901047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:68528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.187 [2024-07-15 10:11:43.901057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.187 [2024-07-15 10:11:43.901067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:68536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.187 [2024-07-15 10:11:43.901080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.187 [2024-07-15 10:11:43.901091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:68544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.187 [2024-07-15 10:11:43.901100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.187 [2024-07-15 10:11:43.901111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:68552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.187 [2024-07-15 10:11:43.901120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.187 [2024-07-15 10:11:43.901131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:68560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.187 [2024-07-15 10:11:43.901140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.187 [2024-07-15 10:11:43.901150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:68568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.187 [2024-07-15 10:11:43.901160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.187 [2024-07-15 10:11:43.901171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:68576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.187 [2024-07-15 10:11:43.901181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:30:51.187 [2024-07-15 10:11:43.901191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:68584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.187 [2024-07-15 10:11:43.901201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.187 [2024-07-15 10:11:43.901211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:68592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.187 [2024-07-15 10:11:43.901221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.187 [2024-07-15 10:11:43.901231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:68600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.187 [2024-07-15 10:11:43.901240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.187 [2024-07-15 10:11:43.901251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:68608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.188 [2024-07-15 10:11:43.901260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.188 [2024-07-15 10:11:43.901270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:68616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.188 [2024-07-15 10:11:43.901279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.188 [2024-07-15 10:11:43.901290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:68624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.188 [2024-07-15 10:11:43.901299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.188 [2024-07-15 10:11:43.901309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:68632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.188 [2024-07-15 10:11:43.901318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.188 [2024-07-15 10:11:43.901329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:68640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.188 [2024-07-15 10:11:43.901344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.188 [2024-07-15 10:11:43.901354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:68648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.188 [2024-07-15 10:11:43.901363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.188 [2024-07-15 10:11:43.901374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:68656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.188 [2024-07-15 10:11:43.901383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.188 [2024-07-15 10:11:43.901393] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:68664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.188 [2024-07-15 10:11:43.901402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.188 [2024-07-15 10:11:43.901413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:68672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.188 [2024-07-15 10:11:43.901422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.188 [2024-07-15 10:11:43.901433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:68680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.188 [2024-07-15 10:11:43.901442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.188 [2024-07-15 10:11:43.901453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:68688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.188 [2024-07-15 10:11:43.901462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.188 [2024-07-15 10:11:43.901472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:68696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.188 [2024-07-15 10:11:43.901487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.188 [2024-07-15 10:11:43.901498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:68704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.188 [2024-07-15 10:11:43.901508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.188 [2024-07-15 10:11:43.901519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:68712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.188 [2024-07-15 10:11:43.901528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.188 [2024-07-15 10:11:43.901539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:68720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.188 [2024-07-15 10:11:43.901548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.188 [2024-07-15 10:11:43.901559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:68728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.188 [2024-07-15 10:11:43.901568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.188 [2024-07-15 10:11:43.901579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:68736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.188 [2024-07-15 10:11:43.901588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.188 [2024-07-15 10:11:43.901602] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:68744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.188 [2024-07-15 10:11:43.901611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.188 [2024-07-15 10:11:43.901622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:68752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.188 [2024-07-15 10:11:43.901631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.188 [2024-07-15 10:11:43.901651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:68760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.188 [2024-07-15 10:11:43.901659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.188 [2024-07-15 10:11:43.901674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:68768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.188 [2024-07-15 10:11:43.901683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.188 [2024-07-15 10:11:43.901693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:68776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.188 [2024-07-15 10:11:43.901700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.188 [2024-07-15 10:11:43.901710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:68784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.188 [2024-07-15 10:11:43.901718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.188 [2024-07-15 10:11:43.901727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:67856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.188 [2024-07-15 10:11:43.901735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.188 [2024-07-15 10:11:43.901745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:67864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.188 [2024-07-15 10:11:43.901752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.188 [2024-07-15 10:11:43.901762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:67872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.188 [2024-07-15 10:11:43.901770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.188 [2024-07-15 10:11:43.901779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:67880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.188 [2024-07-15 10:11:43.901787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.188 [2024-07-15 10:11:43.901797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:67888 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.188 [2024-07-15 10:11:43.901807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.188 [2024-07-15 10:11:43.901817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:67896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.188 [2024-07-15 10:11:43.901825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.188 [2024-07-15 10:11:43.901851] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:51.188 [2024-07-15 10:11:43.901863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:67904 len:8 PRP1 0x0 PRP2 0x0 00:30:51.188 [2024-07-15 10:11:43.901871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.188 [2024-07-15 10:11:43.901882] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:51.188 [2024-07-15 10:11:43.901888] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:51.188 [2024-07-15 10:11:43.901895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:67912 len:8 PRP1 0x0 PRP2 0x0 00:30:51.188 [2024-07-15 10:11:43.901903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.188 [2024-07-15 10:11:43.901911] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:51.188 [2024-07-15 10:11:43.901917] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:51.188 [2024-07-15 10:11:43.901923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:67920 len:8 PRP1 0x0 PRP2 0x0 00:30:51.188 [2024-07-15 10:11:43.901931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.188 [2024-07-15 10:11:43.901939] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:51.188 [2024-07-15 10:11:43.901945] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:51.188 [2024-07-15 10:11:43.901951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:67928 len:8 PRP1 0x0 PRP2 0x0 00:30:51.188 [2024-07-15 10:11:43.901959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.188 [2024-07-15 10:11:43.901967] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:51.188 [2024-07-15 10:11:43.901972] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:51.188 [2024-07-15 10:11:43.901978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:67936 len:8 PRP1 0x0 PRP2 0x0 00:30:51.188 [2024-07-15 10:11:43.901986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.188 [2024-07-15 10:11:43.901995] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:51.188 [2024-07-15 10:11:43.902000] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:51.188 [2024-07-15 10:11:43.902007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:67944 len:8 PRP1 0x0 PRP2 0x0 00:30:51.188 [2024-07-15 10:11:43.902014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.188 [2024-07-15 10:11:43.902022] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:51.188 [2024-07-15 10:11:43.902028] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:51.188 [2024-07-15 10:11:43.902034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:67952 len:8 PRP1 0x0 PRP2 0x0 00:30:51.188 [2024-07-15 10:11:43.902042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.188 [2024-07-15 10:11:43.902049] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:51.189 [2024-07-15 10:11:43.902055] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:51.189 [2024-07-15 10:11:43.902063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:67960 len:8 PRP1 0x0 PRP2 0x0 00:30:51.189 [2024-07-15 10:11:43.902070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.189 [2024-07-15 10:11:43.902079] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:51.189 [2024-07-15 10:11:43.902088] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:51.189 [2024-07-15 10:11:43.902095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:67968 len:8 PRP1 0x0 PRP2 0x0 00:30:51.189 [2024-07-15 10:11:43.902103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.189 [2024-07-15 10:11:43.902111] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:51.189 [2024-07-15 10:11:43.902117] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:51.189 [2024-07-15 10:11:43.902123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:67976 len:8 PRP1 0x0 PRP2 0x0 00:30:51.189 [2024-07-15 10:11:43.902130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.189 [2024-07-15 10:11:43.902138] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:51.189 [2024-07-15 10:11:43.902144] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:51.189 [2024-07-15 10:11:43.902150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:67984 len:8 PRP1 0x0 PRP2 0x0 00:30:51.189 [2024-07-15 10:11:43.902158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.189 [2024-07-15 10:11:43.902166] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:51.189 [2024-07-15 10:11:43.902172] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command 
completed manually: 00:30:51.189 [2024-07-15 10:11:43.902178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:67992 len:8 PRP1 0x0 PRP2 0x0 00:30:51.189 [2024-07-15 10:11:43.902186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.189 [2024-07-15 10:11:43.902194] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:51.189 [2024-07-15 10:11:43.902199] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:51.189 [2024-07-15 10:11:43.902205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:68000 len:8 PRP1 0x0 PRP2 0x0 00:30:51.189 [2024-07-15 10:11:43.902213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.189 [2024-07-15 10:11:43.902221] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:51.189 [2024-07-15 10:11:43.902243] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:51.189 [2024-07-15 10:11:43.902250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:68008 len:8 PRP1 0x0 PRP2 0x0 00:30:51.189 [2024-07-15 10:11:43.902259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.189 [2024-07-15 10:11:43.902269] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:51.189 [2024-07-15 10:11:43.902275] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:51.189 [2024-07-15 10:11:43.902282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:68016 len:8 PRP1 0x0 PRP2 0x0 00:30:51.189 [2024-07-15 10:11:43.902290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.189 [2024-07-15 10:11:43.902299] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:51.189 [2024-07-15 10:11:43.902306] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:51.189 [2024-07-15 10:11:43.902314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:68024 len:8 PRP1 0x0 PRP2 0x0 00:30:51.189 [2024-07-15 10:11:43.902323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.189 [2024-07-15 10:11:43.902336] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:51.189 [2024-07-15 10:11:43.902342] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:51.189 [2024-07-15 10:11:43.916575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:68032 len:8 PRP1 0x0 PRP2 0x0 00:30:51.189 [2024-07-15 10:11:43.916624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.189 [2024-07-15 10:11:43.916716] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xfb6500 was disconnected and freed. reset controller. 
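The long run of READ/WRITE completions above all report the same status, ABORTED - SQ DELETION (00/08): once the I/O submission queue is deleted for the path switch, every command still queued on that qpair is completed with that abort status, so the volume of these notices is expected rather than a sign of data-path failure. When triaging a console log like this one, a plain grep is enough to quantify the flood; the file name below is only a placeholder for wherever this output was saved:

  # Placeholder file name - point it at the saved console log.
  grep -c 'ABORTED - SQ DELETION' nvmf-tcp-vg-autotest.log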
00:30:51.189 [2024-07-15 10:11:43.916859] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:51.189 [2024-07-15 10:11:43.916880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.189 [2024-07-15 10:11:43.916896] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:51.189 [2024-07-15 10:11:43.916909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.189 [2024-07-15 10:11:43.916923] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:51.189 [2024-07-15 10:11:43.916936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.189 [2024-07-15 10:11:43.916949] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:51.189 [2024-07-15 10:11:43.916962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.189 [2024-07-15 10:11:43.916974] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11824d0 is same with the state(5) to be set 00:30:51.189 [2024-07-15 10:11:43.918802] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:51.189 [2024-07-15 10:11:43.918846] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11824d0 (9): Bad file descriptor 00:30:51.189 [2024-07-15 10:11:43.918973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.189 [2024-07-15 10:11:43.918996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11824d0 with addr=10.0.0.2, port=4421 00:30:51.189 [2024-07-15 10:11:43.919010] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11824d0 is same with the state(5) to be set 00:30:51.189 [2024-07-15 10:11:43.919031] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11824d0 (9): Bad file descriptor 00:30:51.189 [2024-07-15 10:11:43.919050] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:51.189 [2024-07-15 10:11:43.919063] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:51.189 [2024-07-15 10:11:43.919076] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:51.189 [2024-07-15 10:11:43.919101] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:51.189 [2024-07-15 10:11:43.919113] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:51.189 [2024-07-15 10:11:53.951238] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
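The tail of that trace is the failover itself: the admin qpair's outstanding ASYNC EVENT REQUESTs are aborted, the host logs "resetting controller", the first connect() to 10.0.0.2 port 4421 is refused (errno 111) and the controller briefly sits in a failed state, then a retry about ten seconds later completes with "Resetting controller successful." The multipath script drives this from the target side; its exact steps are outside this excerpt, but a listener switch that produces this kind of reconnect would look roughly like the sketch below (NQN, address and ports taken from this log; illustrative only, not the script's literal contents):

  rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # Drop the listener the host was connected to ...
  $rpc_py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # ... leaving (or adding) the alternate listener the host then reconnects to.
  $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421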
00:30:51.189 Received shutdown signal, test time was about 54.513222 seconds
00:30:51.189
00:30:51.189                                                Latency(us)
00:30:51.189 Device Information : runtime(s)     IOPS    MiB/s  Fail/s  TO/s   Average     min        max
00:30:51.189 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:30:51.189 Verification LBA range: start 0x0 length 0x4000
00:30:51.189 Nvme0n1            :     54.51    9146.12  35.73   0.00    0.00  13972.19  958.71  7121158.93
00:30:51.189 ===================================================================================================================
00:30:51.189 Total              :              9146.12  35.73   0.00    0.00  13972.19  958.71  7121158.93
00:30:51.189 10:12:04 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:30:51.189 10:12:04 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@122 -- # trap - SIGINT SIGTERM EXIT
00:30:51.189 10:12:04 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@124 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
00:30:51.189 10:12:04 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@125 -- # nvmftestfini
00:30:51.189 10:12:04 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@488 -- # nvmfcleanup
00:30:51.189 10:12:04 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@117 -- # sync
00:30:51.189 10:12:04 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:30:51.189 10:12:04 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@120 -- # set +e
00:30:51.189 10:12:04 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@121 -- # for i in {1..20}
00:30:51.189 10:12:04 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:30:51.189 rmmod nvme_tcp
00:30:51.189 rmmod nvme_fabrics
00:30:51.189 rmmod nvme_keyring
00:30:51.189 10:12:04 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:30:51.189 10:12:04 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@124 -- # set -e
00:30:51.189 10:12:04 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@125 -- # return 0
00:30:51.189 10:12:04 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@489 -- # '[' -n 94664 ']'
00:30:51.189 10:12:04 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@490 -- # killprocess 94664
00:30:51.189 10:12:04 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@948 -- # '[' -z 94664 ']'
00:30:51.189 10:12:04 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@952 -- # kill -0 94664
00:30:51.189 10:12:04 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@953 -- # uname
00:30:51.189 10:12:04 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:30:51.189 10:12:04 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 94664
00:30:51.189 killing process with pid 94664
10:12:04 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:30:51.189 10:12:04 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:30:51.189 10:12:04 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@966 -- # echo 'killing process with pid 94664'
00:30:51.189 10:12:04 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@967 -- # kill 94664
00:30:51.189 10:12:04 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@972 -- # wait 94664
00:30:51.189 10:12:04 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:30:51.189 10:12:04 nvmf_tcp.nvmf_host_multipath --
nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:30:51.189 10:12:04 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:30:51.190 10:12:04 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:51.190 10:12:04 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:51.190 10:12:04 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:51.190 10:12:04 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:51.190 10:12:04 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:51.190 10:12:04 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:30:51.190 00:30:51.190 real 0m59.874s 00:30:51.190 user 2m52.327s 00:30:51.190 sys 0m10.643s 00:30:51.190 10:12:04 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:51.190 10:12:04 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:30:51.451 ************************************ 00:30:51.451 END TEST nvmf_host_multipath 00:30:51.451 ************************************ 00:30:51.451 10:12:04 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:30:51.451 10:12:04 nvmf_tcp -- nvmf/nvmf.sh@118 -- # run_test nvmf_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:30:51.451 10:12:04 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:30:51.451 10:12:04 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:51.451 10:12:04 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:51.451 ************************************ 00:30:51.451 START TEST nvmf_timeout 00:30:51.451 ************************************ 00:30:51.451 10:12:04 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:30:51.451 * Looking for test storage... 
00:30:51.451 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:30:51.451 10:12:04 nvmf_tcp.nvmf_timeout -- host/timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:30:51.451 10:12:04 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@7 -- # uname -s 00:30:51.451 10:12:04 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:51.451 10:12:04 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:51.451 10:12:04 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:51.451 10:12:04 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:51.451 10:12:04 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:51.451 10:12:04 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:51.451 10:12:04 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:51.451 10:12:04 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:51.451 10:12:04 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:51.451 10:12:04 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:51.451 10:12:04 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec 00:30:51.451 10:12:04 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=a2b6b25a-cc90-4aea-9f09-c06f8a634aec 00:30:51.451 10:12:04 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:51.451 10:12:04 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:51.451 10:12:04 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:30:51.451 10:12:04 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:51.451 10:12:04 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:30:51.451 10:12:04 nvmf_tcp.nvmf_timeout -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:51.451 10:12:04 nvmf_tcp.nvmf_timeout -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:51.451 10:12:04 nvmf_tcp.nvmf_timeout -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:51.451 10:12:04 nvmf_tcp.nvmf_timeout -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:51.451 10:12:04 nvmf_tcp.nvmf_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:51.451 
10:12:04 nvmf_tcp.nvmf_timeout -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:51.451 10:12:04 nvmf_tcp.nvmf_timeout -- paths/export.sh@5 -- # export PATH 00:30:51.451 10:12:04 nvmf_tcp.nvmf_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:51.451 10:12:04 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@47 -- # : 0 00:30:51.451 10:12:04 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:51.451 10:12:04 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:51.451 10:12:04 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:51.451 10:12:04 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:51.451 10:12:04 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:51.451 10:12:04 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:51.451 10:12:04 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:51.451 10:12:04 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:51.451 10:12:04 nvmf_tcp.nvmf_timeout -- host/timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:51.451 10:12:04 nvmf_tcp.nvmf_timeout -- host/timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:51.451 10:12:04 nvmf_tcp.nvmf_timeout -- host/timeout.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:30:51.451 10:12:04 nvmf_tcp.nvmf_timeout -- host/timeout.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:30:51.451 10:12:04 nvmf_tcp.nvmf_timeout -- host/timeout.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:30:51.451 10:12:04 nvmf_tcp.nvmf_timeout -- host/timeout.sh@19 -- # nvmftestinit 00:30:51.451 10:12:04 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:30:51.451 10:12:04 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:51.451 10:12:04 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@448 -- # prepare_net_devs 00:30:51.451 10:12:04 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@410 -- # local -g is_hw=no 00:30:51.451 10:12:04 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@412 -- # remove_spdk_ns 00:30:51.451 10:12:04 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:51.451 10:12:04 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:51.451 10:12:04 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:51.451 10:12:04 
nvmf_tcp.nvmf_timeout -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:30:51.451 10:12:04 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:30:51.451 10:12:04 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:30:51.451 10:12:04 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:30:51.451 10:12:04 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:30:51.451 10:12:04 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@432 -- # nvmf_veth_init 00:30:51.451 10:12:04 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:51.451 10:12:04 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:51.451 10:12:04 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:30:51.451 10:12:04 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:30:51.451 10:12:04 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:30:51.451 10:12:04 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:30:51.451 10:12:04 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:30:51.451 10:12:04 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:51.451 10:12:04 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:30:51.451 10:12:04 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:30:51.451 10:12:04 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:30:51.451 10:12:04 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:30:51.451 10:12:04 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:30:51.451 10:12:04 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:30:51.451 Cannot find device "nvmf_tgt_br" 00:30:51.451 10:12:04 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@155 -- # true 00:30:51.451 10:12:04 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:30:51.451 Cannot find device "nvmf_tgt_br2" 00:30:51.451 10:12:04 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@156 -- # true 00:30:51.452 10:12:04 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:30:51.452 10:12:05 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:30:51.452 Cannot find device "nvmf_tgt_br" 00:30:51.452 10:12:05 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@158 -- # true 00:30:51.452 10:12:05 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:30:51.710 Cannot find device "nvmf_tgt_br2" 00:30:51.710 10:12:05 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@159 -- # true 00:30:51.710 10:12:05 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:30:51.710 10:12:05 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:30:51.710 10:12:05 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:30:51.710 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:30:51.710 10:12:05 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@162 -- # true 00:30:51.710 10:12:05 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:30:51.710 Cannot open network 
namespace "nvmf_tgt_ns_spdk": No such file or directory 00:30:51.710 10:12:05 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@163 -- # true 00:30:51.710 10:12:05 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:30:51.710 10:12:05 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:30:51.710 10:12:05 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:30:51.710 10:12:05 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:30:51.710 10:12:05 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:30:51.710 10:12:05 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:30:51.710 10:12:05 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:30:51.710 10:12:05 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:30:51.710 10:12:05 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:30:51.710 10:12:05 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:30:51.710 10:12:05 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:30:51.710 10:12:05 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:30:51.710 10:12:05 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:30:51.710 10:12:05 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:30:51.710 10:12:05 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:30:51.710 10:12:05 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:30:51.710 10:12:05 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:30:51.710 10:12:05 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:30:51.710 10:12:05 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:30:51.710 10:12:05 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:30:51.710 10:12:05 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:30:51.710 10:12:05 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:30:51.710 10:12:05 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:30:51.710 10:12:05 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:30:51.710 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:51.710 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.088 ms 00:30:51.710 00:30:51.710 --- 10.0.0.2 ping statistics --- 00:30:51.710 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:51.710 rtt min/avg/max/mdev = 0.088/0.088/0.088/0.000 ms 00:30:51.710 10:12:05 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:30:51.710 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:30:51.710 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.058 ms 00:30:51.710 00:30:51.710 --- 10.0.0.3 ping statistics --- 00:30:51.710 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:51.710 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:30:51.710 10:12:05 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:30:51.969 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:51.969 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:30:51.969 00:30:51.969 --- 10.0.0.1 ping statistics --- 00:30:51.969 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:51.969 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:30:51.969 10:12:05 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:51.969 10:12:05 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@433 -- # return 0 00:30:51.969 10:12:05 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:30:51.969 10:12:05 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:51.969 10:12:05 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:30:51.969 10:12:05 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:30:51.969 10:12:05 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:51.969 10:12:05 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:30:51.969 10:12:05 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:30:51.969 10:12:05 nvmf_tcp.nvmf_timeout -- host/timeout.sh@21 -- # nvmfappstart -m 0x3 00:30:51.969 10:12:05 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:30:51.969 10:12:05 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@722 -- # xtrace_disable 00:30:51.969 10:12:05 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:30:51.969 10:12:05 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@481 -- # nvmfpid=96020 00:30:51.969 10:12:05 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:30:51.969 10:12:05 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@482 -- # waitforlisten 96020 00:30:51.969 10:12:05 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@829 -- # '[' -z 96020 ']' 00:30:51.969 10:12:05 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:51.969 10:12:05 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:51.969 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:51.969 10:12:05 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:51.969 10:12:05 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:51.969 10:12:05 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:30:51.969 [2024-07-15 10:12:05.382140] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:30:51.969 [2024-07-15 10:12:05.382220] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:51.969 [2024-07-15 10:12:05.506613] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:30:52.239 [2024-07-15 10:12:05.616037] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:52.239 [2024-07-15 10:12:05.616089] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:52.239 [2024-07-15 10:12:05.616095] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:52.239 [2024-07-15 10:12:05.616100] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:52.239 [2024-07-15 10:12:05.616104] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:52.239 [2024-07-15 10:12:05.616310] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:52.239 [2024-07-15 10:12:05.616311] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:30:52.828 10:12:06 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:52.828 10:12:06 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@862 -- # return 0 00:30:52.828 10:12:06 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:30:52.828 10:12:06 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:52.828 10:12:06 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:30:52.828 10:12:06 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:52.828 10:12:06 nvmf_tcp.nvmf_timeout -- host/timeout.sh@23 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid || :; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:52.828 10:12:06 nvmf_tcp.nvmf_timeout -- host/timeout.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:30:53.087 [2024-07-15 10:12:06.544790] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:53.087 10:12:06 nvmf_tcp.nvmf_timeout -- host/timeout.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:30:53.347 Malloc0 00:30:53.347 10:12:06 nvmf_tcp.nvmf_timeout -- host/timeout.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:53.606 10:12:06 nvmf_tcp.nvmf_timeout -- host/timeout.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:53.866 10:12:07 nvmf_tcp.nvmf_timeout -- host/timeout.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:53.866 [2024-07-15 10:12:07.438741] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:54.125 10:12:07 nvmf_tcp.nvmf_timeout -- host/timeout.sh@32 -- # bdevperf_pid=96111 00:30:54.125 10:12:07 nvmf_tcp.nvmf_timeout -- host/timeout.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:30:54.125 10:12:07 nvmf_tcp.nvmf_timeout -- 
host/timeout.sh@34 -- # waitforlisten 96111 /var/tmp/bdevperf.sock 00:30:54.125 10:12:07 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@829 -- # '[' -z 96111 ']' 00:30:54.125 10:12:07 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:54.125 10:12:07 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:54.125 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:30:54.125 10:12:07 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:54.125 10:12:07 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:54.125 10:12:07 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:30:54.125 [2024-07-15 10:12:07.514642] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:30:54.125 [2024-07-15 10:12:07.514723] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96111 ] 00:30:54.125 [2024-07-15 10:12:07.644255] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:54.384 [2024-07-15 10:12:07.750860] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:30:54.953 10:12:08 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:54.953 10:12:08 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@862 -- # return 0 00:30:54.953 10:12:08 nvmf_tcp.nvmf_timeout -- host/timeout.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:30:55.212 10:12:08 nvmf_tcp.nvmf_timeout -- host/timeout.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:30:55.472 NVMe0n1 00:30:55.473 10:12:08 nvmf_tcp.nvmf_timeout -- host/timeout.sh@51 -- # rpc_pid=96159 00:30:55.473 10:12:08 nvmf_tcp.nvmf_timeout -- host/timeout.sh@50 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:30:55.473 10:12:08 nvmf_tcp.nvmf_timeout -- host/timeout.sh@53 -- # sleep 1 00:30:55.473 Running I/O for 10 seconds... 
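Everything between the transport creation and the "Running I/O for 10 seconds..." line above is the provisioning and attach sequence for the timeout test. Condensed into one bash sketch that reuses the exact RPC calls, addresses and sockets from this run; the $SPDK/$rpc shorthands and the &/$! bookkeeping are added for readability, and the short --ctrlr-loss-timeout-sec / --reconnect-delay-sec values are what let the reset behaviour below play out inside the 10-second verify run:

SPDK=/home/vagrant/spdk_repo/spdk
rpc="$SPDK/scripts/rpc.py"

# Target side: TCP transport, a 64 MB / 512-byte-block malloc namespace,
# and a listener on 10.0.0.2:4420.
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512 -b Malloc0
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# Initiator side: bdevperf waits for RPC configuration (-z) on its own socket.
"$SPDK/build/examples/bdevperf" -m 0x4 -z -r /var/tmp/bdevperf.sock \
    -q 128 -o 4096 -w verify -t 10 -f &
bdevperf_pid=$!
waitforlisten "$bdevperf_pid" /var/tmp/bdevperf.sock

# Apply the same bdev_nvme options as in the trace (-r -1), then attach the
# remote controller with a 5 s ctrlr-loss timeout and 2 s reconnect delay.
$rpc -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
$rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
    -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
    --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2

# Kick off the 10-second verify workload against the attached bdev.
"$SPDK/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bdevperf.sock perform_tests &
rpc_pid=$!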
00:30:56.412 10:12:09 nvmf_tcp.nvmf_timeout -- host/timeout.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:56.675 [2024-07-15 10:12:10.063849] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8d900 is same with the state(5) to be set 00:30:56.675 [2024-07-15 10:12:10.063902] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8d900 is same with the state(5) to be set 00:30:56.675 [2024-07-15 10:12:10.063910] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8d900 is same with the state(5) to be set 00:30:56.675 [2024-07-15 10:12:10.063916] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8d900 is same with the state(5) to be set 00:30:56.675 [2024-07-15 10:12:10.063922] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8d900 is same with the state(5) to be set 00:30:56.675 [2024-07-15 10:12:10.063928] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8d900 is same with the state(5) to be set 00:30:56.675 [2024-07-15 10:12:10.063933] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8d900 is same with the state(5) to be set 00:30:56.675 [2024-07-15 10:12:10.063939] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8d900 is same with the state(5) to be set 00:30:56.675 [2024-07-15 10:12:10.063945] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8d900 is same with the state(5) to be set 00:30:56.675 [2024-07-15 10:12:10.063951] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8d900 is same with the state(5) to be set 00:30:56.675 [2024-07-15 10:12:10.063957] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c8d900 is same with the state(5) to be set 00:30:56.675 [2024-07-15 10:12:10.066580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:97312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.675 [2024-07-15 10:12:10.066625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.675 [2024-07-15 10:12:10.066643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:97320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.675 [2024-07-15 10:12:10.066650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.675 [2024-07-15 10:12:10.066680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:97328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.675 [2024-07-15 10:12:10.066688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.675 [2024-07-15 10:12:10.066701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:97336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.675 [2024-07-15 10:12:10.066708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.675 [2024-07-15 10:12:10.066720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:97344 len:8 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.675 [2024-07-15 10:12:10.066727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.675 [2024-07-15 10:12:10.066752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:97352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.675 [2024-07-15 10:12:10.066776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.675 [2024-07-15 10:12:10.066800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:97360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.675 [2024-07-15 10:12:10.066811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.675 [2024-07-15 10:12:10.066829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:97368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.675 [2024-07-15 10:12:10.066836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.675 [2024-07-15 10:12:10.066856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:97376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.675 [2024-07-15 10:12:10.066864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.675 [2024-07-15 10:12:10.066882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:97384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.675 [2024-07-15 10:12:10.066906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.675 [2024-07-15 10:12:10.066915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:97392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.675 [2024-07-15 10:12:10.066922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.675 [2024-07-15 10:12:10.066931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:97400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.675 [2024-07-15 10:12:10.066944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.675 [2024-07-15 10:12:10.066953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:97272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.675 [2024-07-15 10:12:10.066959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.675 [2024-07-15 10:12:10.066967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:97408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.675 [2024-07-15 10:12:10.066974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.675 [2024-07-15 10:12:10.066982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:97416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.675 
[2024-07-15 10:12:10.066989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.675 [2024-07-15 10:12:10.066997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:97424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.675 [2024-07-15 10:12:10.067003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.675 [2024-07-15 10:12:10.067011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:97432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.675 [2024-07-15 10:12:10.067018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.675 [2024-07-15 10:12:10.067026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:97440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.675 [2024-07-15 10:12:10.067033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.675 [2024-07-15 10:12:10.067041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:97448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.675 [2024-07-15 10:12:10.067047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.675 [2024-07-15 10:12:10.067056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:97456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.675 [2024-07-15 10:12:10.067062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.675 [2024-07-15 10:12:10.067070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:97464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.675 [2024-07-15 10:12:10.067076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.675 [2024-07-15 10:12:10.067084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:97472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.675 [2024-07-15 10:12:10.067091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.675 [2024-07-15 10:12:10.067100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:97480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.675 [2024-07-15 10:12:10.067106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.675 [2024-07-15 10:12:10.067115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:97488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.675 [2024-07-15 10:12:10.067121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.675 [2024-07-15 10:12:10.067129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:97496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.675 [2024-07-15 10:12:10.067135] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.675 [2024-07-15 10:12:10.067143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:97504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.675 [2024-07-15 10:12:10.067149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.675 [2024-07-15 10:12:10.067157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:97512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.675 [2024-07-15 10:12:10.067163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.675 [2024-07-15 10:12:10.067172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:97520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.675 [2024-07-15 10:12:10.067177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.675 [2024-07-15 10:12:10.067186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:97528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.675 [2024-07-15 10:12:10.067192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.675 [2024-07-15 10:12:10.067200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:97536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.675 [2024-07-15 10:12:10.067206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.675 [2024-07-15 10:12:10.067214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:97544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.675 [2024-07-15 10:12:10.067221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.675 [2024-07-15 10:12:10.067229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:97552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.675 [2024-07-15 10:12:10.067235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.675 [2024-07-15 10:12:10.067243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:97560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.675 [2024-07-15 10:12:10.067249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.675 [2024-07-15 10:12:10.067258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:97568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.676 [2024-07-15 10:12:10.067266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.676 [2024-07-15 10:12:10.067274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:97576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.676 [2024-07-15 10:12:10.067281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.676 [2024-07-15 10:12:10.067289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:97584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.676 [2024-07-15 10:12:10.067296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.676 [2024-07-15 10:12:10.067304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:97592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.676 [2024-07-15 10:12:10.067311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.676 [2024-07-15 10:12:10.067319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:97600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.676 [2024-07-15 10:12:10.067325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.676 [2024-07-15 10:12:10.067333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:97608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.676 [2024-07-15 10:12:10.067339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.676 [2024-07-15 10:12:10.067348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:97616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.676 [2024-07-15 10:12:10.067354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.676 [2024-07-15 10:12:10.067362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:97624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.676 [2024-07-15 10:12:10.067368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.676 [2024-07-15 10:12:10.067376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:97632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.676 [2024-07-15 10:12:10.067382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.676 [2024-07-15 10:12:10.067390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:97640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.676 [2024-07-15 10:12:10.067397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.676 [2024-07-15 10:12:10.067404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:97648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.676 [2024-07-15 10:12:10.067411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.676 [2024-07-15 10:12:10.067418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:97656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.676 [2024-07-15 10:12:10.067425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:30:56.676 [2024-07-15 10:12:10.067433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:97664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.676 [2024-07-15 10:12:10.067439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.676 [2024-07-15 10:12:10.067448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:97672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.676 [2024-07-15 10:12:10.067454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.676 [2024-07-15 10:12:10.067462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:97680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.676 [2024-07-15 10:12:10.067468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.676 [2024-07-15 10:12:10.067477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:97688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.676 [2024-07-15 10:12:10.067484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.676 [2024-07-15 10:12:10.067492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:97696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.676 [2024-07-15 10:12:10.067499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.676 [2024-07-15 10:12:10.067517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:97704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.676 [2024-07-15 10:12:10.067523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.676 [2024-07-15 10:12:10.067531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:97712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.676 [2024-07-15 10:12:10.067537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.676 [2024-07-15 10:12:10.067546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:97720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.676 [2024-07-15 10:12:10.067552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.676 [2024-07-15 10:12:10.067560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:97728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.676 [2024-07-15 10:12:10.067566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.676 [2024-07-15 10:12:10.067574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:97736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.676 [2024-07-15 10:12:10.067580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.676 [2024-07-15 
10:12:10.067588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:97744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.676 [2024-07-15 10:12:10.067594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.676 [2024-07-15 10:12:10.067602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:97752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.676 [2024-07-15 10:12:10.067608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.676 [2024-07-15 10:12:10.067616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:97760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.676 [2024-07-15 10:12:10.067622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.676 [2024-07-15 10:12:10.067630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:97768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.676 [2024-07-15 10:12:10.067637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.676 [2024-07-15 10:12:10.067645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:97776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.676 [2024-07-15 10:12:10.067650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.676 [2024-07-15 10:12:10.067673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:97784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.676 [2024-07-15 10:12:10.067680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.676 [2024-07-15 10:12:10.067706] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:56.676 [2024-07-15 10:12:10.067713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97792 len:8 PRP1 0x0 PRP2 0x0 00:30:56.676 [2024-07-15 10:12:10.067720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.676 [2024-07-15 10:12:10.067729] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:56.676 [2024-07-15 10:12:10.067734] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:56.676 [2024-07-15 10:12:10.067740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97800 len:8 PRP1 0x0 PRP2 0x0 00:30:56.676 [2024-07-15 10:12:10.067746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.676 [2024-07-15 10:12:10.067753] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:56.676 [2024-07-15 10:12:10.067758] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:56.676 [2024-07-15 10:12:10.067764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97808 len:8 PRP1 0x0 PRP2 0x0 00:30:56.676 
[2024-07-15 10:12:10.067770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.676 [2024-07-15 10:12:10.067777] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:56.676 [2024-07-15 10:12:10.067782] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:56.676 [2024-07-15 10:12:10.067801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97816 len:8 PRP1 0x0 PRP2 0x0 00:30:56.676 [2024-07-15 10:12:10.067807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.676 [2024-07-15 10:12:10.067814] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:56.676 [2024-07-15 10:12:10.067819] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:56.676 [2024-07-15 10:12:10.067825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97824 len:8 PRP1 0x0 PRP2 0x0 00:30:56.676 [2024-07-15 10:12:10.067831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.676 [2024-07-15 10:12:10.067837] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:56.676 [2024-07-15 10:12:10.067842] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:56.676 [2024-07-15 10:12:10.067855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97832 len:8 PRP1 0x0 PRP2 0x0 00:30:56.676 [2024-07-15 10:12:10.067861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.676 [2024-07-15 10:12:10.067868] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:56.676 [2024-07-15 10:12:10.067873] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:56.676 [2024-07-15 10:12:10.067879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97840 len:8 PRP1 0x0 PRP2 0x0 00:30:56.676 [2024-07-15 10:12:10.067885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.676 [2024-07-15 10:12:10.067891] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:56.676 [2024-07-15 10:12:10.067896] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:56.676 [2024-07-15 10:12:10.067901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97848 len:8 PRP1 0x0 PRP2 0x0 00:30:56.676 [2024-07-15 10:12:10.067914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.676 [2024-07-15 10:12:10.067926] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:56.676 [2024-07-15 10:12:10.067931] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:56.676 [2024-07-15 10:12:10.067936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97856 len:8 PRP1 0x0 PRP2 0x0 00:30:56.677 [2024-07-15 10:12:10.067943] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.677 [2024-07-15 10:12:10.067949] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:56.677 [2024-07-15 10:12:10.067955] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:56.677 [2024-07-15 10:12:10.067960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97864 len:8 PRP1 0x0 PRP2 0x0 00:30:56.677 [2024-07-15 10:12:10.067974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.677 [2024-07-15 10:12:10.067981] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:56.677 [2024-07-15 10:12:10.067986] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:56.677 [2024-07-15 10:12:10.067992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97872 len:8 PRP1 0x0 PRP2 0x0 00:30:56.677 [2024-07-15 10:12:10.067998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.677 [2024-07-15 10:12:10.068005] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:56.677 [2024-07-15 10:12:10.068010] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:56.677 [2024-07-15 10:12:10.068015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97880 len:8 PRP1 0x0 PRP2 0x0 00:30:56.677 [2024-07-15 10:12:10.068027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.677 [2024-07-15 10:12:10.068034] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:56.677 [2024-07-15 10:12:10.068039] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:56.677 [2024-07-15 10:12:10.068045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97888 len:8 PRP1 0x0 PRP2 0x0 00:30:56.677 [2024-07-15 10:12:10.068051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.677 [2024-07-15 10:12:10.068058] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:56.677 [2024-07-15 10:12:10.068063] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:56.677 [2024-07-15 10:12:10.068068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97896 len:8 PRP1 0x0 PRP2 0x0 00:30:56.677 [2024-07-15 10:12:10.068074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.677 [2024-07-15 10:12:10.068081] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:56.677 [2024-07-15 10:12:10.068092] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:56.677 [2024-07-15 10:12:10.068097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97904 len:8 PRP1 0x0 PRP2 0x0 00:30:56.677 [2024-07-15 10:12:10.068104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.677 [2024-07-15 10:12:10.068110] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:56.677 [2024-07-15 10:12:10.068116] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:56.677 [2024-07-15 10:12:10.068128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97912 len:8 PRP1 0x0 PRP2 0x0 00:30:56.677 [2024-07-15 10:12:10.068134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.677 [2024-07-15 10:12:10.068141] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:56.677 [2024-07-15 10:12:10.068146] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:56.677 [2024-07-15 10:12:10.068152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97920 len:8 PRP1 0x0 PRP2 0x0 00:30:56.677 [2024-07-15 10:12:10.068158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.677 [2024-07-15 10:12:10.068165] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:56.677 [2024-07-15 10:12:10.068170] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:56.677 [2024-07-15 10:12:10.068175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97928 len:8 PRP1 0x0 PRP2 0x0 00:30:56.677 [2024-07-15 10:12:10.068188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.677 [2024-07-15 10:12:10.068195] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:56.677 [2024-07-15 10:12:10.068201] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:56.677 [2024-07-15 10:12:10.068206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97936 len:8 PRP1 0x0 PRP2 0x0 00:30:56.677 [2024-07-15 10:12:10.068212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.677 [2024-07-15 10:12:10.068225] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:56.677 [2024-07-15 10:12:10.068231] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:56.677 [2024-07-15 10:12:10.068236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97944 len:8 PRP1 0x0 PRP2 0x0 00:30:56.677 [2024-07-15 10:12:10.068242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.677 [2024-07-15 10:12:10.068249] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:56.677 [2024-07-15 10:12:10.068254] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:56.677 [2024-07-15 10:12:10.068260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97952 len:8 PRP1 0x0 PRP2 0x0 00:30:56.677 [2024-07-15 10:12:10.068266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:30:56.677 [2024-07-15 10:12:10.068273] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:56.677 [2024-07-15 10:12:10.068284] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:56.677 [2024-07-15 10:12:10.068290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97960 len:8 PRP1 0x0 PRP2 0x0 00:30:56.677 [2024-07-15 10:12:10.068296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.677 [2024-07-15 10:12:10.068302] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:56.677 [2024-07-15 10:12:10.068310] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:56.677 [2024-07-15 10:12:10.068315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97968 len:8 PRP1 0x0 PRP2 0x0 00:30:56.677 [2024-07-15 10:12:10.068326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.677 [2024-07-15 10:12:10.068334] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:56.677 [2024-07-15 10:12:10.068339] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:56.677 [2024-07-15 10:12:10.068344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97976 len:8 PRP1 0x0 PRP2 0x0 00:30:56.677 [2024-07-15 10:12:10.068351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.677 [2024-07-15 10:12:10.068357] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:56.677 [2024-07-15 10:12:10.068362] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:56.677 [2024-07-15 10:12:10.068368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97984 len:8 PRP1 0x0 PRP2 0x0 00:30:56.677 [2024-07-15 10:12:10.068389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.677 [2024-07-15 10:12:10.068396] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:56.677 [2024-07-15 10:12:10.068401] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:56.677 [2024-07-15 10:12:10.068406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97992 len:8 PRP1 0x0 PRP2 0x0 00:30:56.677 [2024-07-15 10:12:10.068413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.677 [2024-07-15 10:12:10.068419] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:56.677 [2024-07-15 10:12:10.068424] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:56.677 [2024-07-15 10:12:10.068429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98000 len:8 PRP1 0x0 PRP2 0x0 00:30:56.677 [2024-07-15 10:12:10.068435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.677 [2024-07-15 10:12:10.068441] 
nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:56.677 [2024-07-15 10:12:10.068450] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:56.677 [2024-07-15 10:12:10.068472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98008 len:8 PRP1 0x0 PRP2 0x0 00:30:56.677 [2024-07-15 10:12:10.068478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.677 [2024-07-15 10:12:10.068485] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:56.677 [2024-07-15 10:12:10.068490] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:56.677 [2024-07-15 10:12:10.068496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98016 len:8 PRP1 0x0 PRP2 0x0 00:30:56.677 [2024-07-15 10:12:10.068502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.677 [2024-07-15 10:12:10.068509] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:56.677 [2024-07-15 10:12:10.068513] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:56.677 [2024-07-15 10:12:10.068519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98024 len:8 PRP1 0x0 PRP2 0x0 00:30:56.677 [2024-07-15 10:12:10.068530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.677 [2024-07-15 10:12:10.068537] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:56.677 [2024-07-15 10:12:10.068542] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:56.677 [2024-07-15 10:12:10.068549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98032 len:8 PRP1 0x0 PRP2 0x0 00:30:56.677 [2024-07-15 10:12:10.068555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.677 [2024-07-15 10:12:10.068561] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:56.677 [2024-07-15 10:12:10.068566] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:56.677 [2024-07-15 10:12:10.068572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98040 len:8 PRP1 0x0 PRP2 0x0 00:30:56.677 [2024-07-15 10:12:10.068577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.677 [2024-07-15 10:12:10.068584] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:56.677 [2024-07-15 10:12:10.068589] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:56.677 [2024-07-15 10:12:10.068599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98048 len:8 PRP1 0x0 PRP2 0x0 00:30:56.677 [2024-07-15 10:12:10.068606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.677 [2024-07-15 10:12:10.068612] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: 
aborting queued i/o 00:30:56.677 [2024-07-15 10:12:10.068617] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:56.677 [2024-07-15 10:12:10.068623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98056 len:8 PRP1 0x0 PRP2 0x0 00:30:56.678 [2024-07-15 10:12:10.068629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.678 [2024-07-15 10:12:10.068636] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:56.678 [2024-07-15 10:12:10.068640] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:56.678 [2024-07-15 10:12:10.068645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98064 len:8 PRP1 0x0 PRP2 0x0 00:30:56.678 [2024-07-15 10:12:10.068651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.678 [2024-07-15 10:12:10.068676] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:56.678 [2024-07-15 10:12:10.068683] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:56.678 [2024-07-15 10:12:10.068689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98072 len:8 PRP1 0x0 PRP2 0x0 00:30:56.678 [2024-07-15 10:12:10.068695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.678 [2024-07-15 10:12:10.068702] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:56.678 [2024-07-15 10:12:10.068707] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:56.678 [2024-07-15 10:12:10.068713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98080 len:8 PRP1 0x0 PRP2 0x0 00:30:56.678 [2024-07-15 10:12:10.068719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.678 [2024-07-15 10:12:10.068725] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:56.678 [2024-07-15 10:12:10.068730] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:56.678 [2024-07-15 10:12:10.068735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98088 len:8 PRP1 0x0 PRP2 0x0 00:30:56.678 [2024-07-15 10:12:10.068741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.678 [2024-07-15 10:12:10.068753] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:56.678 [2024-07-15 10:12:10.068759] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:56.678 [2024-07-15 10:12:10.068764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98096 len:8 PRP1 0x0 PRP2 0x0 00:30:56.678 [2024-07-15 10:12:10.068770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.678 [2024-07-15 10:12:10.068777] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:56.678 [2024-07-15 
10:12:10.068782] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:56.678 [2024-07-15 10:12:10.068787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98104 len:8 PRP1 0x0 PRP2 0x0 00:30:56.678 [2024-07-15 10:12:10.068793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.678 [2024-07-15 10:12:10.068805] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:56.678 [2024-07-15 10:12:10.068811] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:56.678 [2024-07-15 10:12:10.068816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98112 len:8 PRP1 0x0 PRP2 0x0 00:30:56.678 [2024-07-15 10:12:10.068822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.678 [2024-07-15 10:12:10.068829] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:56.678 [2024-07-15 10:12:10.068834] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:56.678 [2024-07-15 10:12:10.068839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98120 len:8 PRP1 0x0 PRP2 0x0 00:30:56.678 [2024-07-15 10:12:10.068845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.678 [2024-07-15 10:12:10.068857] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:56.678 [2024-07-15 10:12:10.068863] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:56.678 [2024-07-15 10:12:10.068868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98128 len:8 PRP1 0x0 PRP2 0x0 00:30:56.678 [2024-07-15 10:12:10.068874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.678 [2024-07-15 10:12:10.068881] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:56.678 [2024-07-15 10:12:10.068887] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:56.678 [2024-07-15 10:12:10.068893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98136 len:8 PRP1 0x0 PRP2 0x0 00:30:56.678 [2024-07-15 10:12:10.089214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.678 [2024-07-15 10:12:10.089283] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:56.678 [2024-07-15 10:12:10.089292] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:56.678 [2024-07-15 10:12:10.089302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98144 len:8 PRP1 0x0 PRP2 0x0 00:30:56.678 [2024-07-15 10:12:10.089314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.678 [2024-07-15 10:12:10.089323] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:56.678 [2024-07-15 10:12:10.089329] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:56.678 [2024-07-15 10:12:10.089336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98152 len:8 PRP1 0x0 PRP2 0x0 00:30:56.678 [2024-07-15 10:12:10.089344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.678 [2024-07-15 10:12:10.089353] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:56.678 [2024-07-15 10:12:10.089359] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:56.678 [2024-07-15 10:12:10.089366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98160 len:8 PRP1 0x0 PRP2 0x0 00:30:56.678 [2024-07-15 10:12:10.089374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.678 [2024-07-15 10:12:10.089382] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:56.678 [2024-07-15 10:12:10.089388] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:56.678 [2024-07-15 10:12:10.089395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98168 len:8 PRP1 0x0 PRP2 0x0 00:30:56.678 [2024-07-15 10:12:10.089403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.678 [2024-07-15 10:12:10.089412] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:56.678 [2024-07-15 10:12:10.089418] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:56.678 [2024-07-15 10:12:10.089424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98176 len:8 PRP1 0x0 PRP2 0x0 00:30:56.678 [2024-07-15 10:12:10.089432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.678 [2024-07-15 10:12:10.089440] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:56.678 [2024-07-15 10:12:10.089446] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:56.678 [2024-07-15 10:12:10.089453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98184 len:8 PRP1 0x0 PRP2 0x0 00:30:56.678 [2024-07-15 10:12:10.089462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.678 [2024-07-15 10:12:10.089470] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:56.678 [2024-07-15 10:12:10.089476] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:56.678 [2024-07-15 10:12:10.089483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98192 len:8 PRP1 0x0 PRP2 0x0 00:30:56.678 [2024-07-15 10:12:10.089490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.678 [2024-07-15 10:12:10.089498] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:56.678 [2024-07-15 10:12:10.089505] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: 
Command completed manually: 00:30:56.678 [2024-07-15 10:12:10.089513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98200 len:8 PRP1 0x0 PRP2 0x0 00:30:56.678 [2024-07-15 10:12:10.089521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.678 [2024-07-15 10:12:10.089529] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:56.678 [2024-07-15 10:12:10.089535] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:56.678 [2024-07-15 10:12:10.089542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98208 len:8 PRP1 0x0 PRP2 0x0 00:30:56.678 [2024-07-15 10:12:10.089586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.678 [2024-07-15 10:12:10.089595] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:56.678 [2024-07-15 10:12:10.089601] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:56.678 [2024-07-15 10:12:10.089607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98216 len:8 PRP1 0x0 PRP2 0x0 00:30:56.678 [2024-07-15 10:12:10.089615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.678 [2024-07-15 10:12:10.089623] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:56.678 [2024-07-15 10:12:10.089630] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:56.678 [2024-07-15 10:12:10.089637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98224 len:8 PRP1 0x0 PRP2 0x0 00:30:56.678 [2024-07-15 10:12:10.089644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.678 [2024-07-15 10:12:10.089653] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:56.678 [2024-07-15 10:12:10.089685] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:56.678 [2024-07-15 10:12:10.089693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98232 len:8 PRP1 0x0 PRP2 0x0 00:30:56.678 [2024-07-15 10:12:10.089701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.678 [2024-07-15 10:12:10.089709] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:56.678 [2024-07-15 10:12:10.089715] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:56.678 [2024-07-15 10:12:10.089723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98240 len:8 PRP1 0x0 PRP2 0x0 00:30:56.678 [2024-07-15 10:12:10.089739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.678 [2024-07-15 10:12:10.089748] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:56.678 [2024-07-15 10:12:10.089755] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:56.678 [2024-07-15 
10:12:10.089761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98248 len:8 PRP1 0x0 PRP2 0x0 00:30:56.678 [2024-07-15 10:12:10.089769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.678 [2024-07-15 10:12:10.089777] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:56.678 [2024-07-15 10:12:10.089793] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:56.679 [2024-07-15 10:12:10.089800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98256 len:8 PRP1 0x0 PRP2 0x0 00:30:56.679 [2024-07-15 10:12:10.089813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.679 [2024-07-15 10:12:10.089822] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:56.679 [2024-07-15 10:12:10.089829] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:56.679 [2024-07-15 10:12:10.089835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98264 len:8 PRP1 0x0 PRP2 0x0 00:30:56.679 [2024-07-15 10:12:10.089843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.679 [2024-07-15 10:12:10.089860] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:56.679 [2024-07-15 10:12:10.089866] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:56.679 [2024-07-15 10:12:10.089873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98272 len:8 PRP1 0x0 PRP2 0x0 00:30:56.679 [2024-07-15 10:12:10.089881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.679 [2024-07-15 10:12:10.089890] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:56.679 [2024-07-15 10:12:10.089896] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:56.679 [2024-07-15 10:12:10.089903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98280 len:8 PRP1 0x0 PRP2 0x0 00:30:56.679 [2024-07-15 10:12:10.089911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.679 [2024-07-15 10:12:10.089920] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:56.679 [2024-07-15 10:12:10.089932] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:56.679 [2024-07-15 10:12:10.089940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98288 len:8 PRP1 0x0 PRP2 0x0 00:30:56.679 [2024-07-15 10:12:10.089947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.679 [2024-07-15 10:12:10.089956] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:56.679 [2024-07-15 10:12:10.089970] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:56.679 [2024-07-15 10:12:10.089977] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:97280 len:8 PRP1 0x0 PRP2 0x0 00:30:56.679 [2024-07-15 10:12:10.089984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.679 [2024-07-15 10:12:10.089993] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:56.679 [2024-07-15 10:12:10.090000] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:56.679 [2024-07-15 10:12:10.090007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:97288 len:8 PRP1 0x0 PRP2 0x0 00:30:56.679 [2024-07-15 10:12:10.090015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.679 [2024-07-15 10:12:10.090031] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:56.679 [2024-07-15 10:12:10.090037] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:56.679 [2024-07-15 10:12:10.090044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:97296 len:8 PRP1 0x0 PRP2 0x0 00:30:56.679 [2024-07-15 10:12:10.090057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.679 [2024-07-15 10:12:10.090066] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:56.679 [2024-07-15 10:12:10.090072] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:56.679 [2024-07-15 10:12:10.090079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:97304 len:8 PRP1 0x0 PRP2 0x0 00:30:56.679 [2024-07-15 10:12:10.090094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.679 [2024-07-15 10:12:10.090176] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1c3d8d0 was disconnected and freed. reset controller. 
00:30:56.679 [2024-07-15 10:12:10.090342] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:56.679 [2024-07-15 10:12:10.090367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.679 [2024-07-15 10:12:10.090380] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:56.679 [2024-07-15 10:12:10.090388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.679 [2024-07-15 10:12:10.090397] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:56.679 [2024-07-15 10:12:10.090406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.679 [2024-07-15 10:12:10.090415] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:56.679 [2024-07-15 10:12:10.090424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.679 [2024-07-15 10:12:10.090433] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd0240 is same with the state(5) to be set 00:30:56.679 [2024-07-15 10:12:10.090752] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.679 [2024-07-15 10:12:10.090781] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bd0240 (9): Bad file descriptor 00:30:56.679 [2024-07-15 10:12:10.090884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.679 [2024-07-15 10:12:10.090905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bd0240 with addr=10.0.0.2, port=4420 00:30:56.679 [2024-07-15 10:12:10.090915] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd0240 is same with the state(5) to be set 00:30:56.679 [2024-07-15 10:12:10.090931] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bd0240 (9): Bad file descriptor 00:30:56.679 [2024-07-15 10:12:10.090945] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.679 [2024-07-15 10:12:10.090952] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.679 [2024-07-15 10:12:10.090961] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.679 [2024-07-15 10:12:10.090980] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
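Each failed cycle in the trace above has the same shape: connect() to 10.0.0.2 port 4420 is refused (errno 111, i.e. nothing is accepting on that port any more), the reconnect poll reports the controller reinitialization as failed, the reset is marked as failed, and bdev_nvme schedules another attempt. The script output that follows simply sleeps and re-queries bdevperf between attempts. As a hedged illustration only (the rpc.py path, socket and RPC names are taken verbatim from this trace; the loop itself is illustrative and not the actual host/timeout.sh code):

  # Illustrative polling loop: check whether the NVMe0 controller is still
  # registered with bdevperf while bdev_nvme keeps retrying the connection.
  for _ in 1 2 3; do
      /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | jq -r '.[].name'
      sleep 2
  done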
00:30:56.679 [2024-07-15 10:12:10.090988] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.679 10:12:10 nvmf_tcp.nvmf_timeout -- host/timeout.sh@56 -- # sleep 2 00:30:58.588 [2024-07-15 10:12:12.087394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.588 [2024-07-15 10:12:12.087464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bd0240 with addr=10.0.0.2, port=4420 00:30:58.588 [2024-07-15 10:12:12.087475] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd0240 is same with the state(5) to be set 00:30:58.588 [2024-07-15 10:12:12.087495] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bd0240 (9): Bad file descriptor 00:30:58.588 [2024-07-15 10:12:12.087507] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:58.588 [2024-07-15 10:12:12.087513] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:58.588 [2024-07-15 10:12:12.087521] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:58.588 [2024-07-15 10:12:12.087541] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:58.588 [2024-07-15 10:12:12.087548] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:58.588 10:12:12 nvmf_tcp.nvmf_timeout -- host/timeout.sh@57 -- # get_controller 00:30:58.588 10:12:12 nvmf_tcp.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:30:58.588 10:12:12 nvmf_tcp.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name' 00:30:58.847 10:12:12 nvmf_tcp.nvmf_timeout -- host/timeout.sh@57 -- # [[ NVMe0 == \N\V\M\e\0 ]] 00:30:58.847 10:12:12 nvmf_tcp.nvmf_timeout -- host/timeout.sh@58 -- # get_bdev 00:30:58.847 10:12:12 nvmf_tcp.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name' 00:30:58.847 10:12:12 nvmf_tcp.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:30:59.111 10:12:12 nvmf_tcp.nvmf_timeout -- host/timeout.sh@58 -- # [[ NVMe0n1 == \N\V\M\e\0\n\1 ]] 00:30:59.111 10:12:12 nvmf_tcp.nvmf_timeout -- host/timeout.sh@61 -- # sleep 5 00:31:00.501 [2024-07-15 10:12:14.083953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:00.501 [2024-07-15 10:12:14.084016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bd0240 with addr=10.0.0.2, port=4420 00:31:00.501 [2024-07-15 10:12:14.084028] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd0240 is same with the state(5) to be set 00:31:00.501 [2024-07-15 10:12:14.084049] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bd0240 (9): Bad file descriptor 00:31:00.501 [2024-07-15 10:12:14.084062] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:00.501 [2024-07-15 10:12:14.084068] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:00.501 [2024-07-15 10:12:14.084076] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed 
state.
00:31:00.501 [2024-07-15 10:12:14.084099] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:00.501 [2024-07-15 10:12:14.084106] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:03.035 [2024-07-15 10:12:16.080405] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:03.035 [2024-07-15 10:12:16.080463] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:03.035 [2024-07-15 10:12:16.080470] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:03.035 [2024-07-15 10:12:16.080477] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state
00:31:03.035 [2024-07-15 10:12:16.080515] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:03.603
00:31:03.603                                                  Latency(us)
00:31:03.603 Device Information : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:31:03.603 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:31:03.603 Verification LBA range: start 0x0 length 0x4000
00:31:03.603 NVMe0n1            :       8.14    1493.52       5.83      15.72      0.00   84673.45    1545.39 7033243.39
00:31:03.603 ===================================================================================================================
00:31:03.603 Total              :              1493.52       5.83      15.72      0.00   84673.45    1545.39 7033243.39
00:31:03.603 0
00:31:04.174 10:12:17 nvmf_tcp.nvmf_timeout -- host/timeout.sh@62 -- # get_controller
00:31:04.174 10:12:17 nvmf_tcp.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:31:04.174 10:12:17 nvmf_tcp.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name'
00:31:04.433 10:12:17 nvmf_tcp.nvmf_timeout -- host/timeout.sh@62 -- # [[ '' == '' ]]
00:31:04.433 10:12:17 nvmf_tcp.nvmf_timeout -- host/timeout.sh@63 -- # get_bdev
00:31:04.433 10:12:17 nvmf_tcp.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs
00:31:04.433 10:12:17 nvmf_tcp.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name'
00:31:04.433 10:12:17 nvmf_tcp.nvmf_timeout -- host/timeout.sh@63 -- # [[ '' == '' ]]
00:31:04.433 10:12:17 nvmf_tcp.nvmf_timeout -- host/timeout.sh@65 -- # wait 96159
00:31:04.433 10:12:17 nvmf_tcp.nvmf_timeout -- host/timeout.sh@67 -- # killprocess 96111
00:31:04.433 10:12:17 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@948 -- # '[' -z 96111 ']'
00:31:04.433 10:12:17 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@952 -- # kill -0 96111
00:31:04.433 10:12:17 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # uname
00:31:04.433 10:12:18 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:31:04.433 10:12:18 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 96111
00:31:04.693 10:12:18 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # process_name=reactor_2
00:31:04.693 10:12:18 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']'
00:31:04.693 10:12:18 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@966 -- # echo 'killing process with pid 96111'
00:31:04.693 killing process with pid 96111
00:31:04.693 10:12:18 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@967 -- # kill 96111
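The empty strings compared by [[ '' == '' ]] above are the point of this step: after the reconnect attempts fail, bdev_nvme_get_controllers and bdev_get_bdevs both return nothing, which the test takes as confirmation that the controller and its bdev were removed before it kills the first bdevperf instance. A minimal way to run the same check by hand, assuming the same bdevperf RPC socket (the two commands are copied verbatim from the trace; the surrounding comment is illustrative):

  # Both queries should print nothing once the failed controller has been deleted.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | jq -r '.[].name'
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs | jq -r '.[].name'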
00:31:04.693 Received shutdown signal, test time was about 9.094294 seconds
00:31:04.693
00:31:04.693                                                  Latency(us)
00:31:04.693 Device Information : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:31:04.693 ===================================================================================================================
00:31:04.693 Total              :                 0.00       0.00       0.00       0.00       0.00       0.00       0.00
00:31:04.693 10:12:18 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@972 -- # wait 96111
00:31:04.693 10:12:18 nvmf_tcp.nvmf_timeout -- host/timeout.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:31:04.953 [2024-07-15 10:12:18.369389] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:31:04.953 10:12:18 nvmf_tcp.nvmf_timeout -- host/timeout.sh@74 -- # bdevperf_pid=96314
00:31:04.953 10:12:18 nvmf_tcp.nvmf_timeout -- host/timeout.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f
00:31:04.953 10:12:18 nvmf_tcp.nvmf_timeout -- host/timeout.sh@76 -- # waitforlisten 96314 /var/tmp/bdevperf.sock
00:31:04.953 10:12:18 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@829 -- # '[' -z 96314 ']'
00:31:04.953 10:12:18 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:31:04.953 10:12:18 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@834 -- # local max_retries=100
00:31:04.953 10:12:18 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:31:04.953 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:31:04.953 10:12:18 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@838 -- # xtrace_disable
00:31:04.953 10:12:18 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x
00:31:04.953 [2024-07-15 10:12:18.443201] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization...
00:31:04.953 [2024-07-15 10:12:18.443270] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96314 ] 00:31:05.211 [2024-07-15 10:12:18.580136] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:05.211 [2024-07-15 10:12:18.678204] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:31:05.779 10:12:19 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:31:05.779 10:12:19 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@862 -- # return 0 00:31:05.779 10:12:19 nvmf_tcp.nvmf_timeout -- host/timeout.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:31:06.038 10:12:19 nvmf_tcp.nvmf_timeout -- host/timeout.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1 00:31:06.298 NVMe0n1 00:31:06.298 10:12:19 nvmf_tcp.nvmf_timeout -- host/timeout.sh@84 -- # rpc_pid=96356 00:31:06.298 10:12:19 nvmf_tcp.nvmf_timeout -- host/timeout.sh@83 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:31:06.298 10:12:19 nvmf_tcp.nvmf_timeout -- host/timeout.sh@86 -- # sleep 1 00:31:06.298 Running I/O for 10 seconds... 00:31:07.281 10:12:20 nvmf_tcp.nvmf_timeout -- host/timeout.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:07.543 [2024-07-15 10:12:20.946094] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e8bb50 is same with the state(5) to be set 00:31:07.543 [2024-07-15 10:12:20.946141] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e8bb50 is same with the state(5) to be set 00:31:07.543 [2024-07-15 10:12:20.946148] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e8bb50 is same with the state(5) to be set 00:31:07.543 [2024-07-15 10:12:20.946153] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e8bb50 is same with the state(5) to be set 00:31:07.543 [2024-07-15 10:12:20.946158] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e8bb50 is same with the state(5) to be set 00:31:07.543 [2024-07-15 10:12:20.946163] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e8bb50 is same with the state(5) to be set 00:31:07.543 [2024-07-15 10:12:20.946168] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e8bb50 is same with the state(5) to be set 00:31:07.543 [2024-07-15 10:12:20.946172] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e8bb50 is same with the state(5) to be set 00:31:07.543 [2024-07-15 10:12:20.946177] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e8bb50 is same with the state(5) to be set 00:31:07.543 [2024-07-15 10:12:20.946182] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e8bb50 is same with the state(5) to be set 00:31:07.543 [2024-07-15 10:12:20.946187] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x1e8bb50 is same with the state(5) to be set 00:31:07.543 [2024-07-15 10:12:20.946191] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e8bb50 is same with the state(5) to be set 00:31:07.543 [2024-07-15 10:12:20.946196] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e8bb50 is same with the state(5) to be set 00:31:07.543 [2024-07-15 10:12:20.946208] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e8bb50 is same with the state(5) to be set 00:31:07.543 [2024-07-15 10:12:20.946212] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e8bb50 is same with the state(5) to be set 00:31:07.543 [2024-07-15 10:12:20.946217] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e8bb50 is same with the state(5) to be set 00:31:07.543 [2024-07-15 10:12:20.946221] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e8bb50 is same with the state(5) to be set 00:31:07.543 [2024-07-15 10:12:20.946227] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e8bb50 is same with the state(5) to be set 00:31:07.543 [2024-07-15 10:12:20.946231] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e8bb50 is same with the state(5) to be set 00:31:07.543 [2024-07-15 10:12:20.946235] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e8bb50 is same with the state(5) to be set 00:31:07.543 [2024-07-15 10:12:20.946240] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e8bb50 is same with the state(5) to be set 00:31:07.543 [2024-07-15 10:12:20.946245] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e8bb50 is same with the state(5) to be set 00:31:07.543 [2024-07-15 10:12:20.946249] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e8bb50 is same with the state(5) to be set 00:31:07.543 [2024-07-15 10:12:20.946253] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e8bb50 is same with the state(5) to be set 00:31:07.543 [2024-07-15 10:12:20.946258] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e8bb50 is same with the state(5) to be set 00:31:07.543 [2024-07-15 10:12:20.946262] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e8bb50 is same with the state(5) to be set 00:31:07.543 [2024-07-15 10:12:20.946267] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e8bb50 is same with the state(5) to be set 00:31:07.543 [2024-07-15 10:12:20.946271] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e8bb50 is same with the state(5) to be set 00:31:07.543 [2024-07-15 10:12:20.946276] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e8bb50 is same with the state(5) to be set 00:31:07.543 [2024-07-15 10:12:20.946281] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e8bb50 is same with the state(5) to be set 00:31:07.543 [2024-07-15 10:12:20.946285] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e8bb50 is same with the state(5) to be set 00:31:07.543 [2024-07-15 10:12:20.946291] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e8bb50 is same with the state(5) to be set 00:31:07.543 [2024-07-15 10:12:20.946296] 
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e8bb50 is same with the state(5) to be set 00:31:07.543 [2024-07-15 10:12:20.946300] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e8bb50 is same with the state(5) to be set 00:31:07.543 [2024-07-15 10:12:20.946305] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e8bb50 is same with the state(5) to be set 00:31:07.543 [2024-07-15 10:12:20.946310] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e8bb50 is same with the state(5) to be set 00:31:07.543 [2024-07-15 10:12:20.946314] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e8bb50 is same with the state(5) to be set 00:31:07.543 [2024-07-15 10:12:20.946320] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e8bb50 is same with the state(5) to be set 00:31:07.543 [2024-07-15 10:12:20.946325] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e8bb50 is same with the state(5) to be set 00:31:07.543 [2024-07-15 10:12:20.946329] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e8bb50 is same with the state(5) to be set 00:31:07.543 [2024-07-15 10:12:20.946334] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e8bb50 is same with the state(5) to be set 00:31:07.543 [2024-07-15 10:12:20.946339] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e8bb50 is same with the state(5) to be set 00:31:07.543 [2024-07-15 10:12:20.946344] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e8bb50 is same with the state(5) to be set 00:31:07.543 [2024-07-15 10:12:20.946349] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e8bb50 is same with the state(5) to be set 00:31:07.543 [2024-07-15 10:12:20.946354] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e8bb50 is same with the state(5) to be set 00:31:07.543 [2024-07-15 10:12:20.946358] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e8bb50 is same with the state(5) to be set 00:31:07.543 [2024-07-15 10:12:20.946363] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e8bb50 is same with the state(5) to be set 00:31:07.543 [2024-07-15 10:12:20.946368] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e8bb50 is same with the state(5) to be set 00:31:07.543 [2024-07-15 10:12:20.946372] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e8bb50 is same with the state(5) to be set 00:31:07.543 [2024-07-15 10:12:20.946377] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e8bb50 is same with the state(5) to be set 00:31:07.543 [2024-07-15 10:12:20.946381] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e8bb50 is same with the state(5) to be set 00:31:07.543 [2024-07-15 10:12:20.946386] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e8bb50 is same with the state(5) to be set 00:31:07.543 [2024-07-15 10:12:20.946390] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e8bb50 is same with the state(5) to be set 00:31:07.543 [2024-07-15 10:12:20.946395] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e8bb50 is same with the 
state(5) to be set 00:31:07.543 [2024-07-15 10:12:20.946399] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e8bb50 is same with the state(5) to be set 00:31:07.543 [2024-07-15 10:12:20.946404] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e8bb50 is same with the state(5) to be set 00:31:07.543 [2024-07-15 10:12:20.946408] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e8bb50 is same with the state(5) to be set 00:31:07.543 [2024-07-15 10:12:20.946413] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e8bb50 is same with the state(5) to be set 00:31:07.543 [2024-07-15 10:12:20.946417] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e8bb50 is same with the state(5) to be set 00:31:07.543 [2024-07-15 10:12:20.946422] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e8bb50 is same with the state(5) to be set 00:31:07.543 [2024-07-15 10:12:20.946426] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e8bb50 is same with the state(5) to be set 00:31:07.544 [2024-07-15 10:12:20.946431] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e8bb50 is same with the state(5) to be set 00:31:07.544 [2024-07-15 10:12:20.946435] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e8bb50 is same with the state(5) to be set 00:31:07.544 [2024-07-15 10:12:20.946440] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e8bb50 is same with the state(5) to be set 00:31:07.544 [2024-07-15 10:12:20.946444] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e8bb50 is same with the state(5) to be set 00:31:07.544 [2024-07-15 10:12:20.946449] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e8bb50 is same with the state(5) to be set 00:31:07.544 [2024-07-15 10:12:20.946453] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e8bb50 is same with the state(5) to be set 00:31:07.544 [2024-07-15 10:12:20.946458] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e8bb50 is same with the state(5) to be set 00:31:07.544 [2024-07-15 10:12:20.946463] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e8bb50 is same with the state(5) to be set 00:31:07.544 [2024-07-15 10:12:20.946468] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e8bb50 is same with the state(5) to be set 00:31:07.544 [2024-07-15 10:12:20.946472] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e8bb50 is same with the state(5) to be set 00:31:07.544 [2024-07-15 10:12:20.946478] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e8bb50 is same with the state(5) to be set 00:31:07.544 [2024-07-15 10:12:20.946483] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e8bb50 is same with the state(5) to be set 00:31:07.544 [2024-07-15 10:12:20.946488] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e8bb50 is same with the state(5) to be set 00:31:07.544 [2024-07-15 10:12:20.946493] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e8bb50 is same with the state(5) to be set 00:31:07.544 [2024-07-15 10:12:20.946497] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x1e8bb50 is same with the state(5) to be set 00:31:07.544 [2024-07-15 10:12:20.946502] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e8bb50 is same with the state(5) to be set 00:31:07.544 [2024-07-15 10:12:20.946507] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e8bb50 is same with the state(5) to be set 00:31:07.544 [2024-07-15 10:12:20.946511] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e8bb50 is same with the state(5) to be set 00:31:07.544 [2024-07-15 10:12:20.946515] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e8bb50 is same with the state(5) to be set 00:31:07.544 [2024-07-15 10:12:20.946520] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e8bb50 is same with the state(5) to be set 00:31:07.544 [2024-07-15 10:12:20.946524] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e8bb50 is same with the state(5) to be set 00:31:07.544 [2024-07-15 10:12:20.946529] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e8bb50 is same with the state(5) to be set 00:31:07.544 [2024-07-15 10:12:20.946534] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e8bb50 is same with the state(5) to be set 00:31:07.544 [2024-07-15 10:12:20.946539] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e8bb50 is same with the state(5) to be set 00:31:07.544 [2024-07-15 10:12:20.946543] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e8bb50 is same with the state(5) to be set 00:31:07.544 [2024-07-15 10:12:20.946548] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e8bb50 is same with the state(5) to be set 00:31:07.544 [2024-07-15 10:12:20.946552] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e8bb50 is same with the state(5) to be set 00:31:07.544 [2024-07-15 10:12:20.946556] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e8bb50 is same with the state(5) to be set 00:31:07.544 [2024-07-15 10:12:20.946561] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e8bb50 is same with the state(5) to be set 00:31:07.544 [2024-07-15 10:12:20.946568] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e8bb50 is same with the state(5) to be set 00:31:07.544 [2024-07-15 10:12:20.946572] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e8bb50 is same with the state(5) to be set 00:31:07.544 [2024-07-15 10:12:20.946577] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e8bb50 is same with the state(5) to be set 00:31:07.544 [2024-07-15 10:12:20.946581] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e8bb50 is same with the state(5) to be set 00:31:07.544 [2024-07-15 10:12:20.946586] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e8bb50 is same with the state(5) to be set 00:31:07.544 [2024-07-15 10:12:20.946591] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e8bb50 is same with the state(5) to be set 00:31:07.544 [2024-07-15 10:12:20.946595] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e8bb50 is same with the state(5) to be set 00:31:07.544 [2024-07-15 
10:12:20.946600] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e8bb50 is same with the state(5) to be set 00:31:07.544 [2024-07-15 10:12:20.946604] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e8bb50 is same with the state(5) to be set 00:31:07.544 [2024-07-15 10:12:20.946609] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e8bb50 is same with the state(5) to be set 00:31:07.544 [2024-07-15 10:12:20.946613] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e8bb50 is same with the state(5) to be set 00:31:07.544 [2024-07-15 10:12:20.946618] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e8bb50 is same with the state(5) to be set 00:31:07.544 [2024-07-15 10:12:20.946622] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e8bb50 is same with the state(5) to be set 00:31:07.544 [2024-07-15 10:12:20.946627] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e8bb50 is same with the state(5) to be set 00:31:07.544 [2024-07-15 10:12:20.946632] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e8bb50 is same with the state(5) to be set 00:31:07.544 [2024-07-15 10:12:20.946636] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e8bb50 is same with the state(5) to be set 00:31:07.544 [2024-07-15 10:12:20.946641] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e8bb50 is same with the state(5) to be set 00:31:07.544 [2024-07-15 10:12:20.946645] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e8bb50 is same with the state(5) to be set 00:31:07.544 [2024-07-15 10:12:20.946650] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e8bb50 is same with the state(5) to be set 00:31:07.544 [2024-07-15 10:12:20.946654] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e8bb50 is same with the state(5) to be set 00:31:07.544 [2024-07-15 10:12:20.946670] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e8bb50 is same with the state(5) to be set 00:31:07.544 [2024-07-15 10:12:20.946675] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e8bb50 is same with the state(5) to be set 00:31:07.544 [2024-07-15 10:12:20.946680] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e8bb50 is same with the state(5) to be set 00:31:07.544 [2024-07-15 10:12:20.946684] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e8bb50 is same with the state(5) to be set 00:31:07.544 [2024-07-15 10:12:20.946689] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e8bb50 is same with the state(5) to be set 00:31:07.544 [2024-07-15 10:12:20.946694] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e8bb50 is same with the state(5) to be set 00:31:07.544 [2024-07-15 10:12:20.946699] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e8bb50 is same with the state(5) to be set 00:31:07.544 [2024-07-15 10:12:20.946704] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e8bb50 is same with the state(5) to be set 00:31:07.544 [2024-07-15 10:12:20.946708] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e8bb50 is same 
with the state(5) to be set 00:31:07.544 [2024-07-15 10:12:20.946713] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e8bb50 is same with the state(5) to be set 00:31:07.544 [2024-07-15 10:12:20.946717] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e8bb50 is same with the state(5) to be set 00:31:07.544 [2024-07-15 10:12:20.946722] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e8bb50 is same with the state(5) to be set 00:31:07.544 [2024-07-15 10:12:20.946728] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e8bb50 is same with the state(5) to be set 00:31:07.544 [2024-07-15 10:12:20.946733] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e8bb50 is same with the state(5) to be set 00:31:07.544 [2024-07-15 10:12:20.946738] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e8bb50 is same with the state(5) to be set 00:31:07.544 [2024-07-15 10:12:20.946742] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e8bb50 is same with the state(5) to be set 00:31:07.544 [2024-07-15 10:12:20.948680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:104752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.544 [2024-07-15 10:12:20.948714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:07.544 [2024-07-15 10:12:20.948730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:104760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.544 [2024-07-15 10:12:20.948737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:07.544 [2024-07-15 10:12:20.948744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:104768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.544 [2024-07-15 10:12:20.948750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:07.544 [2024-07-15 10:12:20.948757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:104776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.544 [2024-07-15 10:12:20.948762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:07.544 [2024-07-15 10:12:20.948769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:104784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.544 [2024-07-15 10:12:20.948775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:07.544 [2024-07-15 10:12:20.948781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:104792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.544 [2024-07-15 10:12:20.948787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:07.544 [2024-07-15 10:12:20.948793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:104800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.544 [2024-07-15 10:12:20.948811] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:07.544 [2024-07-15 10:12:20.948819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:104808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.544 [2024-07-15 10:12:20.948824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:07.544 [2024-07-15 10:12:20.948831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:104816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.544 [2024-07-15 10:12:20.948836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:07.544 [2024-07-15 10:12:20.948843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:104824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.544 [2024-07-15 10:12:20.948849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:07.544 [2024-07-15 10:12:20.948856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:104832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.544 [2024-07-15 10:12:20.948861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:07.544 [2024-07-15 10:12:20.948867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:104840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.544 [2024-07-15 10:12:20.948872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:07.545 [2024-07-15 10:12:20.948879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:104848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.545 [2024-07-15 10:12:20.948885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:07.545 [2024-07-15 10:12:20.948891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:104856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.545 [2024-07-15 10:12:20.948900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:07.545 [2024-07-15 10:12:20.948907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:104864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.545 [2024-07-15 10:12:20.948912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:07.545 [2024-07-15 10:12:20.948919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:104872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.545 [2024-07-15 10:12:20.948925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:07.545 [2024-07-15 10:12:20.948932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:104880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.545 [2024-07-15 10:12:20.948937] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:07.545 [2024-07-15 10:12:20.948944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:104888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.545 [2024-07-15 10:12:20.948949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:07.545 [2024-07-15 10:12:20.948956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:104896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.545 [2024-07-15 10:12:20.948961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:07.545 [2024-07-15 10:12:20.948968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:104904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:07.545 [2024-07-15 10:12:20.948973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:07.545 [2024-07-15 10:12:20.948980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:104968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:07.545 [2024-07-15 10:12:20.948986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:07.545 [2024-07-15 10:12:20.948992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:104976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:07.545 [2024-07-15 10:12:20.948998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:07.545 [2024-07-15 10:12:20.949008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:104984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:07.545 [2024-07-15 10:12:20.949014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:07.545 [2024-07-15 10:12:20.949020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:104992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:07.545 [2024-07-15 10:12:20.949026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:07.545 [2024-07-15 10:12:20.949032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:105000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:07.545 [2024-07-15 10:12:20.949038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:07.545 [2024-07-15 10:12:20.949045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:105008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:07.545 [2024-07-15 10:12:20.949050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:07.545 [2024-07-15 10:12:20.949057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:105016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:07.545 [2024-07-15 10:12:20.949062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:07.545 [2024-07-15 10:12:20.949073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:105024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:07.545 [2024-07-15 10:12:20.949078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:07.545 [2024-07-15 10:12:20.949085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:105032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:07.545 [2024-07-15 10:12:20.949090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:07.545 [2024-07-15 10:12:20.949100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:105040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:07.545 [2024-07-15 10:12:20.949105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:07.545 [2024-07-15 10:12:20.949112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:105048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:07.545 [2024-07-15 10:12:20.949118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:07.545 [2024-07-15 10:12:20.949125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:105056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:07.545 [2024-07-15 10:12:20.949130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:07.545 [2024-07-15 10:12:20.949136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:105064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:07.545 [2024-07-15 10:12:20.949142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:07.545 [2024-07-15 10:12:20.949148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:105072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:07.545 [2024-07-15 10:12:20.949153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:07.545 [2024-07-15 10:12:20.949160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:105080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:07.545 [2024-07-15 10:12:20.949166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:07.545 [2024-07-15 10:12:20.949172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:105088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:07.545 [2024-07-15 10:12:20.949177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:07.545 [2024-07-15 10:12:20.949200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:105096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:07.545 [2024-07-15 10:12:20.949206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:31:07.545 [2024-07-15 10:12:20.949214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:105104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:07.545 [2024-07-15 10:12:20.949219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:07.545 [2024-07-15 10:12:20.949226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:105112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:07.545 [2024-07-15 10:12:20.949231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:07.545 [2024-07-15 10:12:20.949238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:105120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:07.545 [2024-07-15 10:12:20.949244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:07.545 [2024-07-15 10:12:20.949251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:105128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:07.545 [2024-07-15 10:12:20.949256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:07.545 [2024-07-15 10:12:20.949270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:105136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:07.545 [2024-07-15 10:12:20.949276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:07.545 [2024-07-15 10:12:20.949283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:105144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:07.545 [2024-07-15 10:12:20.949288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:07.545 [2024-07-15 10:12:20.949295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:105152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:07.545 [2024-07-15 10:12:20.949300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:07.545 [2024-07-15 10:12:20.949307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:105160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:07.545 [2024-07-15 10:12:20.949313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:07.545 [2024-07-15 10:12:20.949320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:105168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:07.545 [2024-07-15 10:12:20.949336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:07.545 [2024-07-15 10:12:20.949343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:105176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:07.545 [2024-07-15 10:12:20.949350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:07.545 [2024-07-15 
10:12:20.949357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:105184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:07.545 [2024-07-15 10:12:20.949362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:07.545 [2024-07-15 10:12:20.949369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:105192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:07.545 [2024-07-15 10:12:20.949374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:07.545 [2024-07-15 10:12:20.949381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:105200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:07.545 [2024-07-15 10:12:20.949387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:07.545 [2024-07-15 10:12:20.949394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:105208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:07.545 [2024-07-15 10:12:20.949404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:07.545 [2024-07-15 10:12:20.949411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:105216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:07.545 [2024-07-15 10:12:20.949416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:07.545 [2024-07-15 10:12:20.949423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:105224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:07.545 [2024-07-15 10:12:20.949428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:07.545 [2024-07-15 10:12:20.949435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:105232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:07.545 [2024-07-15 10:12:20.949440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:07.545 [2024-07-15 10:12:20.949447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:105240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:07.546 [2024-07-15 10:12:20.949452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:07.546 [2024-07-15 10:12:20.949459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:105248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:07.546 [2024-07-15 10:12:20.949468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:07.546 [2024-07-15 10:12:20.949475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:105256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:07.546 [2024-07-15 10:12:20.949480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:07.546 [2024-07-15 10:12:20.949487] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:105264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:07.546 [2024-07-15 10:12:20.949492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:07.546 [2024-07-15 10:12:20.949499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:105272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:07.546 [2024-07-15 10:12:20.949504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:07.546 [2024-07-15 10:12:20.949511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:105280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:07.546 [2024-07-15 10:12:20.949523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:07.546 [2024-07-15 10:12:20.949530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:105288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:07.546 [2024-07-15 10:12:20.949536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:07.546 [2024-07-15 10:12:20.949543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:105296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:07.546 [2024-07-15 10:12:20.949548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:07.546 [2024-07-15 10:12:20.949555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:105304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:07.546 [2024-07-15 10:12:20.949564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:07.546 [2024-07-15 10:12:20.949571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:105312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:07.546 [2024-07-15 10:12:20.949577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:07.546 [2024-07-15 10:12:20.949596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:105320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:07.546 [2024-07-15 10:12:20.949602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:07.546 [2024-07-15 10:12:20.949609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:105328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:07.546 [2024-07-15 10:12:20.949614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:07.546 [2024-07-15 10:12:20.949621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:105336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:07.546 [2024-07-15 10:12:20.949626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:07.546 [2024-07-15 10:12:20.949633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:2 nsid:1 lba:105344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:07.546 [2024-07-15 10:12:20.949638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:07.546 [2024-07-15 10:12:20.949672] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:07.546 [2024-07-15 10:12:20.949679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:105352 len:8 PRP1 0x0 PRP2 0x0 00:31:07.546 [2024-07-15 10:12:20.949684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:07.546 [2024-07-15 10:12:20.949693] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:07.546 [2024-07-15 10:12:20.949697] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:07.546 [2024-07-15 10:12:20.949701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:105360 len:8 PRP1 0x0 PRP2 0x0 00:31:07.546 [2024-07-15 10:12:20.949707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:07.546 [2024-07-15 10:12:20.949713] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:07.546 [2024-07-15 10:12:20.949722] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:07.546 [2024-07-15 10:12:20.949726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:105368 len:8 PRP1 0x0 PRP2 0x0 00:31:07.546 [2024-07-15 10:12:20.949731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:07.546 [2024-07-15 10:12:20.949737] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:07.546 [2024-07-15 10:12:20.949740] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:07.546 [2024-07-15 10:12:20.949745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:105376 len:8 PRP1 0x0 PRP2 0x0 00:31:07.546 [2024-07-15 10:12:20.949750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:07.546 [2024-07-15 10:12:20.949755] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:07.546 [2024-07-15 10:12:20.949759] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:07.546 [2024-07-15 10:12:20.949764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:105384 len:8 PRP1 0x0 PRP2 0x0 00:31:07.546 [2024-07-15 10:12:20.949769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:07.546 [2024-07-15 10:12:20.949774] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:07.546 [2024-07-15 10:12:20.949779] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:07.546 [2024-07-15 10:12:20.949783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:105392 len:8 PRP1 0x0 PRP2 0x0 00:31:07.546 [2024-07-15 10:12:20.949788] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:07.546 [2024-07-15 10:12:20.949794] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:07.546 [2024-07-15 10:12:20.949800] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:07.546 [2024-07-15 10:12:20.949804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:105400 len:8 PRP1 0x0 PRP2 0x0 00:31:07.546 [2024-07-15 10:12:20.949809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:07.546 [2024-07-15 10:12:20.949815] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:07.546 [2024-07-15 10:12:20.949819] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:07.546 [2024-07-15 10:12:20.949824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:105408 len:8 PRP1 0x0 PRP2 0x0 00:31:07.546 [2024-07-15 10:12:20.949843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:07.546 [2024-07-15 10:12:20.949849] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:07.546 [2024-07-15 10:12:20.949853] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:07.546 [2024-07-15 10:12:20.949858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:105416 len:8 PRP1 0x0 PRP2 0x0 00:31:07.546 [2024-07-15 10:12:20.949863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:07.546 [2024-07-15 10:12:20.949868] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:07.546 [2024-07-15 10:12:20.949872] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:07.546 [2024-07-15 10:12:20.949877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:105424 len:8 PRP1 0x0 PRP2 0x0 00:31:07.546 [2024-07-15 10:12:20.949882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:07.546 [2024-07-15 10:12:20.949887] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:07.546 [2024-07-15 10:12:20.949891] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:07.546 [2024-07-15 10:12:20.949896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:105432 len:8 PRP1 0x0 PRP2 0x0 00:31:07.546 [2024-07-15 10:12:20.949901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:07.546 [2024-07-15 10:12:20.949906] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:07.546 [2024-07-15 10:12:20.949910] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:07.546 [2024-07-15 10:12:20.949915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:105440 len:8 PRP1 0x0 PRP2 0x0 00:31:07.546 [2024-07-15 10:12:20.949920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:07.546 [2024-07-15 10:12:20.949937] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:07.546 [2024-07-15 10:12:20.949942] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:07.546 [2024-07-15 10:12:20.949947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:105448 len:8 PRP1 0x0 PRP2 0x0 00:31:07.546 [2024-07-15 10:12:20.949952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:07.546 [2024-07-15 10:12:20.949958] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:07.546 [2024-07-15 10:12:20.949962] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:07.546 [2024-07-15 10:12:20.949966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:105456 len:8 PRP1 0x0 PRP2 0x0 00:31:07.546 [2024-07-15 10:12:20.949972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:07.546 [2024-07-15 10:12:20.949977] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:07.546 [2024-07-15 10:12:20.949981] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:07.546 [2024-07-15 10:12:20.949986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:105464 len:8 PRP1 0x0 PRP2 0x0 00:31:07.546 [2024-07-15 10:12:20.949992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:07.546 [2024-07-15 10:12:20.949997] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:07.546 [2024-07-15 10:12:20.950002] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:07.546 [2024-07-15 10:12:20.950006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:105472 len:8 PRP1 0x0 PRP2 0x0 00:31:07.546 [2024-07-15 10:12:20.950011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:07.546 [2024-07-15 10:12:20.950016] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:07.546 [2024-07-15 10:12:20.950020] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:07.546 [2024-07-15 10:12:20.950028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:105480 len:8 PRP1 0x0 PRP2 0x0 00:31:07.547 [2024-07-15 10:12:20.950033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:07.547 [2024-07-15 10:12:20.950039] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:07.547 [2024-07-15 10:12:20.950043] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:07.547 [2024-07-15 10:12:20.950048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:105488 len:8 PRP1 0x0 PRP2 0x0 00:31:07.547 [2024-07-15 10:12:20.950054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:07.547 
[2024-07-15 10:12:20.950059] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:07.547 [2024-07-15 10:12:20.950063] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:07.547 [2024-07-15 10:12:20.950068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:105496 len:8 PRP1 0x0 PRP2 0x0 00:31:07.547 [2024-07-15 10:12:20.950073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:07.547 [2024-07-15 10:12:20.950078] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:07.547 [2024-07-15 10:12:20.950082] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:07.547 [2024-07-15 10:12:20.950087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:105504 len:8 PRP1 0x0 PRP2 0x0 00:31:07.547 [2024-07-15 10:12:20.950091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:07.547 [2024-07-15 10:12:20.950097] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:07.547 [2024-07-15 10:12:20.950101] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:07.547 [2024-07-15 10:12:20.950106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:105512 len:8 PRP1 0x0 PRP2 0x0 00:31:07.547 [2024-07-15 10:12:20.950117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:07.547 [2024-07-15 10:12:20.950123] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:07.547 [2024-07-15 10:12:20.950127] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:07.547 [2024-07-15 10:12:20.950135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:105520 len:8 PRP1 0x0 PRP2 0x0 00:31:07.547 [2024-07-15 10:12:20.950140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:07.547 [2024-07-15 10:12:20.950145] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:07.547 [2024-07-15 10:12:20.950149] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:07.547 [2024-07-15 10:12:20.950154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:105528 len:8 PRP1 0x0 PRP2 0x0 00:31:07.547 [2024-07-15 10:12:20.950159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:07.547 [2024-07-15 10:12:20.950164] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:07.547 [2024-07-15 10:12:20.950170] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:07.547 [2024-07-15 10:12:20.950175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:105536 len:8 PRP1 0x0 PRP2 0x0 00:31:07.547 [2024-07-15 10:12:20.950180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:07.547 [2024-07-15 10:12:20.950185] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:07.547 [2024-07-15 10:12:20.950190] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:07.547 [2024-07-15 10:12:20.950194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:105544 len:8 PRP1 0x0 PRP2 0x0 00:31:07.547 [2024-07-15 10:12:20.950199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:07.547 [2024-07-15 10:12:20.950205] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:07.547 [2024-07-15 10:12:20.950209] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:07.547 [2024-07-15 10:12:20.950221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:105552 len:8 PRP1 0x0 PRP2 0x0 00:31:07.547 [2024-07-15 10:12:20.950227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:07.547 [2024-07-15 10:12:20.950232] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:07.547 [2024-07-15 10:12:20.950236] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:07.547 [2024-07-15 10:12:20.950241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:105560 len:8 PRP1 0x0 PRP2 0x0 00:31:07.547 [2024-07-15 10:12:20.950246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:07.547 [2024-07-15 10:12:20.950251] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:07.547 [2024-07-15 10:12:20.950257] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:07.547 [2024-07-15 10:12:20.950262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:105568 len:8 PRP1 0x0 PRP2 0x0 00:31:07.547 [2024-07-15 10:12:20.950266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:07.547 [2024-07-15 10:12:20.950272] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:07.547 [2024-07-15 10:12:20.950276] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:07.547 [2024-07-15 10:12:20.950281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:105576 len:8 PRP1 0x0 PRP2 0x0 00:31:07.547 [2024-07-15 10:12:20.950287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:07.547 [2024-07-15 10:12:20.950296] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:07.547 [2024-07-15 10:12:20.950300] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:07.547 [2024-07-15 10:12:20.950305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:105584 len:8 PRP1 0x0 PRP2 0x0 00:31:07.547 [2024-07-15 10:12:20.950310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:07.547 [2024-07-15 10:12:20.950315] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: 
aborting queued i/o 00:31:07.547 [2024-07-15 10:12:20.950319] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:07.547 [2024-07-15 10:12:20.950323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:105592 len:8 PRP1 0x0 PRP2 0x0 00:31:07.547 [2024-07-15 10:12:20.950328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:07.547 [2024-07-15 10:12:20.950334] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:07.547 [2024-07-15 10:12:20.950338] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:07.547 [2024-07-15 10:12:20.950342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:105600 len:8 PRP1 0x0 PRP2 0x0 00:31:07.547 [2024-07-15 10:12:20.950349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:07.547 [2024-07-15 10:12:20.950354] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:07.547 [2024-07-15 10:12:20.950358] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:07.547 [2024-07-15 10:12:20.950363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:105608 len:8 PRP1 0x0 PRP2 0x0 00:31:07.547 [2024-07-15 10:12:20.950368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:07.547 [2024-07-15 10:12:20.950388] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:07.547 [2024-07-15 10:12:20.950393] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:07.547 [2024-07-15 10:12:20.950397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:105616 len:8 PRP1 0x0 PRP2 0x0 00:31:07.547 [2024-07-15 10:12:20.950408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:07.547 [2024-07-15 10:12:20.950414] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:07.547 [2024-07-15 10:12:20.950418] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:07.547 [2024-07-15 10:12:20.950423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:105624 len:8 PRP1 0x0 PRP2 0x0 00:31:07.547 [2024-07-15 10:12:20.950427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:07.547 [2024-07-15 10:12:20.950433] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:07.547 [2024-07-15 10:12:20.950437] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:07.547 [2024-07-15 10:12:20.950441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:105632 len:8 PRP1 0x0 PRP2 0x0 00:31:07.547 [2024-07-15 10:12:20.950446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:07.547 10:12:20 nvmf_tcp.nvmf_timeout -- host/timeout.sh@90 -- # sleep 1 00:31:07.547 [2024-07-15 10:12:20.974682] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:07.547 [2024-07-15 10:12:20.974728] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:07.548 [2024-07-15 10:12:20.974744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:105640 len:8 PRP1 0x0 PRP2 0x0 00:31:07.548 [2024-07-15 10:12:20.974758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:07.548 [2024-07-15 10:12:20.974769] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:07.548 [2024-07-15 10:12:20.974777] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:07.548 [2024-07-15 10:12:20.974785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:105648 len:8 PRP1 0x0 PRP2 0x0 00:31:07.548 [2024-07-15 10:12:20.974795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:07.548 [2024-07-15 10:12:20.974829] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:07.548 [2024-07-15 10:12:20.974844] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:07.548 [2024-07-15 10:12:20.974853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:105656 len:8 PRP1 0x0 PRP2 0x0 00:31:07.548 [2024-07-15 10:12:20.974862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:07.548 [2024-07-15 10:12:20.974872] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:07.548 [2024-07-15 10:12:20.974880] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:07.548 [2024-07-15 10:12:20.974897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:105664 len:8 PRP1 0x0 PRP2 0x0 00:31:07.548 [2024-07-15 10:12:20.974907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:07.548 [2024-07-15 10:12:20.974918] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:07.548 [2024-07-15 10:12:20.974926] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:07.548 [2024-07-15 10:12:20.974934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:105672 len:8 PRP1 0x0 PRP2 0x0 00:31:07.548 [2024-07-15 10:12:20.974944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:07.548 [2024-07-15 10:12:20.974954] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:07.548 [2024-07-15 10:12:20.974962] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:07.548 [2024-07-15 10:12:20.974977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:105680 len:8 PRP1 0x0 PRP2 0x0 00:31:07.548 [2024-07-15 10:12:20.974987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:07.548 [2024-07-15 10:12:20.974997] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: 
aborting queued i/o 00:31:07.548 [2024-07-15 10:12:20.975006] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:07.548 [2024-07-15 10:12:20.975014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:105688 len:8 PRP1 0x0 PRP2 0x0 00:31:07.548 [2024-07-15 10:12:20.975033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:07.548 [2024-07-15 10:12:20.975044] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:07.548 [2024-07-15 10:12:20.975052] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:07.548 [2024-07-15 10:12:20.975061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:105696 len:8 PRP1 0x0 PRP2 0x0 00:31:07.548 [2024-07-15 10:12:20.975071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:07.548 [2024-07-15 10:12:20.975088] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:07.548 [2024-07-15 10:12:20.975096] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:07.548 [2024-07-15 10:12:20.975104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:105704 len:8 PRP1 0x0 PRP2 0x0 00:31:07.548 [2024-07-15 10:12:20.975114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:07.548 [2024-07-15 10:12:20.975130] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:07.548 [2024-07-15 10:12:20.975138] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:07.548 [2024-07-15 10:12:20.975146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:105712 len:8 PRP1 0x0 PRP2 0x0 00:31:07.548 [2024-07-15 10:12:20.975161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:07.548 [2024-07-15 10:12:20.975172] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:07.548 [2024-07-15 10:12:20.975180] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:07.548 [2024-07-15 10:12:20.975188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:105720 len:8 PRP1 0x0 PRP2 0x0 00:31:07.548 [2024-07-15 10:12:20.975197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:07.548 [2024-07-15 10:12:20.975213] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:07.548 [2024-07-15 10:12:20.975221] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:07.548 [2024-07-15 10:12:20.975229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:105728 len:8 PRP1 0x0 PRP2 0x0 00:31:07.548 [2024-07-15 10:12:20.975238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:07.548 [2024-07-15 10:12:20.975254] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:07.548 [2024-07-15 
10:12:20.975262] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:07.548 [2024-07-15 10:12:20.975270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:105736 len:8 PRP1 0x0 PRP2 0x0 00:31:07.548 [2024-07-15 10:12:20.975286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:07.548 [2024-07-15 10:12:20.975297] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:07.548 [2024-07-15 10:12:20.975304] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:07.548 [2024-07-15 10:12:20.975319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:105744 len:8 PRP1 0x0 PRP2 0x0 00:31:07.548 [2024-07-15 10:12:20.975329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:07.548 [2024-07-15 10:12:20.975339] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:07.548 [2024-07-15 10:12:20.975347] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:07.548 [2024-07-15 10:12:20.975383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:105752 len:8 PRP1 0x0 PRP2 0x0 00:31:07.548 [2024-07-15 10:12:20.975401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:07.548 [2024-07-15 10:12:20.975412] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:07.548 [2024-07-15 10:12:20.975419] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:07.548 [2024-07-15 10:12:20.975433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:105760 len:8 PRP1 0x0 PRP2 0x0 00:31:07.548 [2024-07-15 10:12:20.975443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:07.548 [2024-07-15 10:12:20.975453] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:07.548 [2024-07-15 10:12:20.975461] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:07.548 [2024-07-15 10:12:20.975478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:105768 len:8 PRP1 0x0 PRP2 0x0 00:31:07.548 [2024-07-15 10:12:20.975488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:07.548 [2024-07-15 10:12:20.975498] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:07.548 [2024-07-15 10:12:20.975513] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:07.548 [2024-07-15 10:12:20.975522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:104912 len:8 PRP1 0x0 PRP2 0x0 00:31:07.548 [2024-07-15 10:12:20.975532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:07.548 [2024-07-15 10:12:20.975542] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:07.548 [2024-07-15 10:12:20.975555] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:07.548 [2024-07-15 10:12:20.975565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:104920 len:8 PRP1 0x0 PRP2 0x0 00:31:07.548 [2024-07-15 10:12:20.975575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:07.548 [2024-07-15 10:12:20.975585] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:07.548 [2024-07-15 10:12:20.975600] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:07.548 [2024-07-15 10:12:20.975610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:104928 len:8 PRP1 0x0 PRP2 0x0 00:31:07.548 [2024-07-15 10:12:20.975619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:07.548 [2024-07-15 10:12:20.975635] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:07.548 [2024-07-15 10:12:20.975644] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:07.548 [2024-07-15 10:12:20.975652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:104936 len:8 PRP1 0x0 PRP2 0x0 00:31:07.548 [2024-07-15 10:12:20.975685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:07.548 [2024-07-15 10:12:20.975696] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:07.548 [2024-07-15 10:12:20.975709] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:07.548 [2024-07-15 10:12:20.975718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:104944 len:8 PRP1 0x0 PRP2 0x0 00:31:07.548 [2024-07-15 10:12:20.975728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:07.548 [2024-07-15 10:12:20.975745] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:07.548 [2024-07-15 10:12:20.975752] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:07.548 [2024-07-15 10:12:20.975762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:104952 len:8 PRP1 0x0 PRP2 0x0 00:31:07.548 [2024-07-15 10:12:20.975780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:07.548 [2024-07-15 10:12:20.975792] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:07.548 [2024-07-15 10:12:20.975800] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:07.548 [2024-07-15 10:12:20.975809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:104960 len:8 PRP1 0x0 PRP2 0x0 00:31:07.548 [2024-07-15 10:12:20.975824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:07.548 [2024-07-15 10:12:20.975910] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x19868d0 was disconnected and freed. reset controller. 
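The dump above is the host side of a surprise disconnect: after the TCP connection to the target goes away, every queued command on qpair 1 (the WRITEs and trailing READs listed) is completed manually with ABORTED - SQ DELETION (00/08), after which qpair 0x19868d0 is disconnected and freed and a controller reset is scheduled. A rough way to condense a dump like this when reading a saved console log is sketched below; it is illustrative only, and the file name console.log is hypothetical.
  # count how many completions were printed as aborted
  grep -o 'ABORTED - SQ DELETION (00/08)' console.log | wc -l
  # lowest and highest LBA mentioned in the aborted commands
  grep -oE 'lba:[0-9]+' console.log | cut -d: -f2 | sort -n | sed -n '1p;$p'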
00:31:07.548 [2024-07-15 10:12:20.976046] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:07.548 [2024-07-15 10:12:20.976069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:07.548 [2024-07-15 10:12:20.976084] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:07.549 [2024-07-15 10:12:20.976094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:07.549 [2024-07-15 10:12:20.976106] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:07.549 [2024-07-15 10:12:20.976125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:07.549 [2024-07-15 10:12:20.976136] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:07.549 [2024-07-15 10:12:20.976153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:07.549 [2024-07-15 10:12:20.976164] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1919240 is same with the state(5) to be set 00:31:07.549 [2024-07-15 10:12:20.976560] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:07.549 [2024-07-15 10:12:20.976594] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1919240 (9): Bad file descriptor 00:31:07.549 [2024-07-15 10:12:20.976729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.549 [2024-07-15 10:12:20.976758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1919240 with addr=10.0.0.2, port=4420 00:31:07.549 [2024-07-15 10:12:20.976769] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1919240 is same with the state(5) to be set 00:31:07.549 [2024-07-15 10:12:20.976788] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1919240 (9): Bad file descriptor 00:31:07.549 [2024-07-15 10:12:20.976814] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:07.549 [2024-07-15 10:12:20.976830] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:07.549 [2024-07-15 10:12:20.976847] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:07.549 [2024-07-15 10:12:20.976894] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:07.549 [2024-07-15 10:12:20.976919] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:08.486 10:12:21 nvmf_tcp.nvmf_timeout -- host/timeout.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:31:08.486 [2024-07-15 10:12:21.975111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.486 [2024-07-15 10:12:21.975151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1919240 with addr=10.0.0.2, port=4420
00:31:08.486 [2024-07-15 10:12:21.975161] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1919240 is same with the state(5) to be set
00:31:08.486 [2024-07-15 10:12:21.975195] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1919240 (9): Bad file descriptor
00:31:08.486 [2024-07-15 10:12:21.975207] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:08.486 [2024-07-15 10:12:21.975213] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:08.486 [2024-07-15 10:12:21.975220] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:08.486 [2024-07-15 10:12:21.975238] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:08.486 [2024-07-15 10:12:21.975245] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:08.745 [2024-07-15 10:12:22.154058] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:31:08.745 10:12:22 nvmf_tcp.nvmf_timeout -- host/timeout.sh@92 -- # wait 96356
00:31:09.680 [2024-07-15 10:12:22.989986] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:31:17.804
00:31:17.804                                                                                                 Latency(us)
00:31:17.804 Device Information : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:31:17.804 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:31:17.804 Verification LBA range: start 0x0 length 0x4000
00:31:17.804 NVMe0n1            :      10.01    8457.08      33.04       0.00     0.00   15117.21    1430.92 3047738.80
00:31:17.804 ===================================================================================================================
00:31:17.804 Total              :               8457.08      33.04       0.00     0.00   15117.21    1430.92 3047738.80
00:31:17.804 0
00:31:17.804 10:12:29 nvmf_tcp.nvmf_timeout -- host/timeout.sh@97 -- # rpc_pid=96478
00:31:17.804 10:12:29 nvmf_tcp.nvmf_timeout -- host/timeout.sh@96 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:31:17.804 10:12:29 nvmf_tcp.nvmf_timeout -- host/timeout.sh@98 -- # sleep 1
00:31:17.804 Running I/O for 10 seconds...
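In the records above, host/timeout.sh re-adds the TCP listener with rpc.py (host/timeout.sh@91), waits on process 96356, prints a bdevperf latency summary, and then starts the next timed pass via bdevperf.py perform_tests before removing the listener again. A minimal sketch of that listener-toggle step is below; the rpc.py and bdevperf.py invocations are the ones logged, while the variable names and the ordering around them are assumptions.
  NQN=nqn.2016-06.io.spdk:cnode1
  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # drop the listener: in-flight I/O against the subsystem ends up ABORTED - SQ DELETION
  $RPC nvmf_subsystem_remove_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420
  sleep 1
  # restore the listener so the host's controller reset can reconnect to 10.0.0.2:4420
  $RPC nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420
  # trigger another timed I/O run on the already-running bdevperf instance
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests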
00:31:17.804 10:12:30 nvmf_tcp.nvmf_timeout -- host/timeout.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:17.804 [2024-07-15 10:12:31.045854] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce4660 is same with the state(5) to be set 00:31:17.804 [2024-07-15 10:12:31.045905] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce4660 is same with the state(5) to be set 00:31:17.804 [2024-07-15 10:12:31.045912] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce4660 is same with the state(5) to be set 00:31:17.804 [2024-07-15 10:12:31.045917] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce4660 is same with the state(5) to be set 00:31:17.804 [2024-07-15 10:12:31.045922] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce4660 is same with the state(5) to be set 00:31:17.804 [2024-07-15 10:12:31.045927] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce4660 is same with the state(5) to be set 00:31:17.804 [2024-07-15 10:12:31.045932] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce4660 is same with the state(5) to be set 00:31:17.804 [2024-07-15 10:12:31.045937] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce4660 is same with the state(5) to be set 00:31:17.804 [2024-07-15 10:12:31.045942] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce4660 is same with the state(5) to be set 00:31:17.804 [2024-07-15 10:12:31.045947] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce4660 is same with the state(5) to be set 00:31:17.804 [2024-07-15 10:12:31.045952] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce4660 is same with the state(5) to be set 00:31:17.804 [2024-07-15 10:12:31.045957] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce4660 is same with the state(5) to be set 00:31:17.804 [2024-07-15 10:12:31.045961] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce4660 is same with the state(5) to be set 00:31:17.804 [2024-07-15 10:12:31.045966] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce4660 is same with the state(5) to be set 00:31:17.804 [2024-07-15 10:12:31.045970] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce4660 is same with the state(5) to be set 00:31:17.804 [2024-07-15 10:12:31.045975] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce4660 is same with the state(5) to be set 00:31:17.804 [2024-07-15 10:12:31.045979] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce4660 is same with the state(5) to be set 00:31:17.804 [2024-07-15 10:12:31.045984] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce4660 is same with the state(5) to be set 00:31:17.804 [2024-07-15 10:12:31.045989] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce4660 is same with the state(5) to be set 00:31:17.804 [2024-07-15 10:12:31.045993] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce4660 is same with the state(5) to be set 00:31:17.804 [2024-07-15 10:12:31.045998] 
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce4660 is same with the state(5) to be set 00:31:17.804 [2024-07-15 10:12:31.046002] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce4660 is same with the state(5) to be set 00:31:17.804 [2024-07-15 10:12:31.046007] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce4660 is same with the state(5) to be set 00:31:17.804 [2024-07-15 10:12:31.046011] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce4660 is same with the state(5) to be set 00:31:17.804 [2024-07-15 10:12:31.046015] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce4660 is same with the state(5) to be set 00:31:17.804 [2024-07-15 10:12:31.046020] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce4660 is same with the state(5) to be set 00:31:17.804 [2024-07-15 10:12:31.046025] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce4660 is same with the state(5) to be set 00:31:17.804 [2024-07-15 10:12:31.046030] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce4660 is same with the state(5) to be set 00:31:17.804 [2024-07-15 10:12:31.046035] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce4660 is same with the state(5) to be set 00:31:17.804 [2024-07-15 10:12:31.046039] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce4660 is same with the state(5) to be set 00:31:17.804 [2024-07-15 10:12:31.046044] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce4660 is same with the state(5) to be set 00:31:17.804 [2024-07-15 10:12:31.046048] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce4660 is same with the state(5) to be set 00:31:17.804 [2024-07-15 10:12:31.046052] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce4660 is same with the state(5) to be set 00:31:17.804 [2024-07-15 10:12:31.046057] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce4660 is same with the state(5) to be set 00:31:17.804 [2024-07-15 10:12:31.046062] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce4660 is same with the state(5) to be set 00:31:17.804 [2024-07-15 10:12:31.046068] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce4660 is same with the state(5) to be set 00:31:17.804 [2024-07-15 10:12:31.046074] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce4660 is same with the state(5) to be set 00:31:17.804 [2024-07-15 10:12:31.046080] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce4660 is same with the state(5) to be set 00:31:17.804 [2024-07-15 10:12:31.046085] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce4660 is same with the state(5) to be set 00:31:17.804 [2024-07-15 10:12:31.046091] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce4660 is same with the state(5) to be set 00:31:17.805 [2024-07-15 10:12:31.046097] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce4660 is same with the state(5) to be set 00:31:17.805 [2024-07-15 10:12:31.046103] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce4660 is same with the 
state(5) to be set 00:31:17.805 [2024-07-15 10:12:31.046110] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce4660 is same with the state(5) to be set 00:31:17.805 [2024-07-15 10:12:31.046116] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce4660 is same with the state(5) to be set 00:31:17.805 [2024-07-15 10:12:31.046122] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce4660 is same with the state(5) to be set 00:31:17.805 [2024-07-15 10:12:31.046128] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce4660 is same with the state(5) to be set 00:31:17.805 [2024-07-15 10:12:31.046135] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce4660 is same with the state(5) to be set 00:31:17.805 [2024-07-15 10:12:31.046142] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce4660 is same with the state(5) to be set 00:31:17.805 [2024-07-15 10:12:31.046148] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce4660 is same with the state(5) to be set 00:31:17.805 [2024-07-15 10:12:31.046154] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce4660 is same with the state(5) to be set 00:31:17.805 [2024-07-15 10:12:31.046164] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce4660 is same with the state(5) to be set 00:31:17.805 [2024-07-15 10:12:31.046171] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce4660 is same with the state(5) to be set 00:31:17.805 [2024-07-15 10:12:31.046175] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce4660 is same with the state(5) to be set 00:31:17.805 [2024-07-15 10:12:31.046180] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce4660 is same with the state(5) to be set 00:31:17.805 [2024-07-15 10:12:31.046185] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce4660 is same with the state(5) to be set 00:31:17.805 [2024-07-15 10:12:31.046190] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce4660 is same with the state(5) to be set 00:31:17.805 [2024-07-15 10:12:31.046195] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce4660 is same with the state(5) to be set 00:31:17.805 [2024-07-15 10:12:31.046201] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce4660 is same with the state(5) to be set 00:31:17.805 [2024-07-15 10:12:31.046209] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce4660 is same with the state(5) to be set 00:31:17.805 [2024-07-15 10:12:31.046216] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce4660 is same with the state(5) to be set 00:31:17.805 [2024-07-15 10:12:31.046221] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce4660 is same with the state(5) to be set 00:31:17.805 [2024-07-15 10:12:31.046225] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce4660 is same with the state(5) to be set 00:31:17.805 [2024-07-15 10:12:31.046230] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce4660 is same with the state(5) to be set 00:31:17.805 [2024-07-15 10:12:31.046235] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x1ce4660 is same with the state(5) to be set 00:31:17.805 [2024-07-15 10:12:31.046240] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce4660 is same with the state(5) to be set 00:31:17.805 [2024-07-15 10:12:31.046246] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce4660 is same with the state(5) to be set 00:31:17.805 [2024-07-15 10:12:31.046251] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce4660 is same with the state(5) to be set 00:31:17.805 [2024-07-15 10:12:31.046258] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce4660 is same with the state(5) to be set 00:31:17.805 [2024-07-15 10:12:31.046265] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce4660 is same with the state(5) to be set 00:31:17.805 [2024-07-15 10:12:31.046271] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce4660 is same with the state(5) to be set 00:31:17.805 [2024-07-15 10:12:31.046276] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce4660 is same with the state(5) to be set 00:31:17.805 [2024-07-15 10:12:31.046281] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce4660 is same with the state(5) to be set 00:31:17.805 [2024-07-15 10:12:31.047745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:102960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.805 [2024-07-15 10:12:31.047780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.805 [2024-07-15 10:12:31.047796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:102968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.805 [2024-07-15 10:12:31.047802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.805 [2024-07-15 10:12:31.047811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:102976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.805 [2024-07-15 10:12:31.047817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.805 [2024-07-15 10:12:31.047824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:102984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.805 [2024-07-15 10:12:31.047829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.805 [2024-07-15 10:12:31.047836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:102992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.805 [2024-07-15 10:12:31.047842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.805 [2024-07-15 10:12:31.047849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:103000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.805 [2024-07-15 10:12:31.047855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.805 [2024-07-15 10:12:31.047861] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:103008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.805 [2024-07-15 10:12:31.047866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.805 [2024-07-15 10:12:31.047873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:103016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.805 [2024-07-15 10:12:31.047879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.805 [2024-07-15 10:12:31.047897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:103024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.805 [2024-07-15 10:12:31.047903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.805 [2024-07-15 10:12:31.047910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:103032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.805 [2024-07-15 10:12:31.047916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.805 [2024-07-15 10:12:31.047923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:103040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.805 [2024-07-15 10:12:31.047933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.805 [2024-07-15 10:12:31.047941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:103048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.805 [2024-07-15 10:12:31.047946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.805 [2024-07-15 10:12:31.047953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:103056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.805 [2024-07-15 10:12:31.047959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.805 [2024-07-15 10:12:31.047965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:103064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.805 [2024-07-15 10:12:31.047971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.805 [2024-07-15 10:12:31.047983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:103072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.805 [2024-07-15 10:12:31.047989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.805 [2024-07-15 10:12:31.047996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:103080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.805 [2024-07-15 10:12:31.048001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.805 [2024-07-15 10:12:31.048009] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:103088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.805 [2024-07-15 10:12:31.048014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.805 [2024-07-15 10:12:31.048021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:103096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.805 [2024-07-15 10:12:31.048033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.805 [2024-07-15 10:12:31.048039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:103104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.805 [2024-07-15 10:12:31.048045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.805 [2024-07-15 10:12:31.048052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:103112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.805 [2024-07-15 10:12:31.048057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.805 [2024-07-15 10:12:31.048064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:103120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.805 [2024-07-15 10:12:31.048069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.805 [2024-07-15 10:12:31.048075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:103128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.805 [2024-07-15 10:12:31.048087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.805 [2024-07-15 10:12:31.048094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:103136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.805 [2024-07-15 10:12:31.048099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.805 [2024-07-15 10:12:31.048106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:103144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.805 [2024-07-15 10:12:31.048111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.805 [2024-07-15 10:12:31.048129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:103152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.805 [2024-07-15 10:12:31.048135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.805 [2024-07-15 10:12:31.048142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:103160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.805 [2024-07-15 10:12:31.048147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.805 [2024-07-15 10:12:31.048154] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:13 nsid:1 lba:103168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.805 [2024-07-15 10:12:31.048159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.805 [2024-07-15 10:12:31.048171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:103176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.805 [2024-07-15 10:12:31.048177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.805 [2024-07-15 10:12:31.048184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:103184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.805 [2024-07-15 10:12:31.048190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.805 [2024-07-15 10:12:31.048196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:103192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.805 [2024-07-15 10:12:31.048203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.805 [2024-07-15 10:12:31.048210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:103200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.805 [2024-07-15 10:12:31.048221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.805 [2024-07-15 10:12:31.048228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:103208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.805 [2024-07-15 10:12:31.048233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.805 [2024-07-15 10:12:31.048240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:103216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.805 [2024-07-15 10:12:31.048245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.805 [2024-07-15 10:12:31.048253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:103224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.805 [2024-07-15 10:12:31.048263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.805 [2024-07-15 10:12:31.048270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:103232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.805 [2024-07-15 10:12:31.048277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.805 [2024-07-15 10:12:31.048284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:103240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.805 [2024-07-15 10:12:31.048290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.805 [2024-07-15 10:12:31.048296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 
nsid:1 lba:103248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.805 [2024-07-15 10:12:31.048301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.805 [2024-07-15 10:12:31.048308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:103256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.805 [2024-07-15 10:12:31.048314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.805 [2024-07-15 10:12:31.048320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:103264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.805 [2024-07-15 10:12:31.048326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.805 [2024-07-15 10:12:31.048332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:103272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.805 [2024-07-15 10:12:31.048337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.805 [2024-07-15 10:12:31.048344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:103280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.805 [2024-07-15 10:12:31.048363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.805 [2024-07-15 10:12:31.048370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:103288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.805 [2024-07-15 10:12:31.048375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.805 [2024-07-15 10:12:31.048397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:103296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.805 [2024-07-15 10:12:31.048403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.805 [2024-07-15 10:12:31.048410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:103304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.805 [2024-07-15 10:12:31.048415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.805 [2024-07-15 10:12:31.048422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:103312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.805 [2024-07-15 10:12:31.048427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.805 [2024-07-15 10:12:31.048434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:103320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.805 [2024-07-15 10:12:31.048439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.805 [2024-07-15 10:12:31.048446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:103328 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.805 [2024-07-15 10:12:31.048456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.805 [2024-07-15 10:12:31.048464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:103336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.805 [2024-07-15 10:12:31.048469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.806 [2024-07-15 10:12:31.048476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:103352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:17.806 [2024-07-15 10:12:31.048482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.806 [2024-07-15 10:12:31.048489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:103360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:17.806 [2024-07-15 10:12:31.048494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.806 [2024-07-15 10:12:31.048506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:103368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:17.806 [2024-07-15 10:12:31.048511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.806 [2024-07-15 10:12:31.048518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:103376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:17.806 [2024-07-15 10:12:31.048523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.806 [2024-07-15 10:12:31.048530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:103384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:17.806 [2024-07-15 10:12:31.048535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.806 [2024-07-15 10:12:31.048542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:103392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:17.806 [2024-07-15 10:12:31.048547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.806 [2024-07-15 10:12:31.048554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:103400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:17.806 [2024-07-15 10:12:31.048564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.806 [2024-07-15 10:12:31.048571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:103408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:17.806 [2024-07-15 10:12:31.048577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.806 [2024-07-15 10:12:31.048584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:103416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:31:17.806 [2024-07-15 10:12:31.048589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.806 [2024-07-15 10:12:31.048597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:103424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:17.806 [2024-07-15 10:12:31.048607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.806 [2024-07-15 10:12:31.048614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:103432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:17.806 [2024-07-15 10:12:31.048619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.806 [2024-07-15 10:12:31.048625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:103440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:17.806 [2024-07-15 10:12:31.048631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.806 [2024-07-15 10:12:31.048637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:103448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:17.806 [2024-07-15 10:12:31.048643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.806 [2024-07-15 10:12:31.048649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:103456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:17.806 [2024-07-15 10:12:31.048654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.806 [2024-07-15 10:12:31.048674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:103464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:17.806 [2024-07-15 10:12:31.048681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.806 [2024-07-15 10:12:31.048688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:103344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.806 [2024-07-15 10:12:31.048693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.806 [2024-07-15 10:12:31.048700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:103472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:17.806 [2024-07-15 10:12:31.048706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.806 [2024-07-15 10:12:31.048713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:103480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:17.806 [2024-07-15 10:12:31.048718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.806 [2024-07-15 10:12:31.048725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:103488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:17.806 [2024-07-15 
10:12:31.048730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.806 [2024-07-15 10:12:31.048736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:103496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:17.806 [2024-07-15 10:12:31.048743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.806 [2024-07-15 10:12:31.048754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:103504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:17.806 [2024-07-15 10:12:31.048760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.806 [2024-07-15 10:12:31.048766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:103512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:17.806 [2024-07-15 10:12:31.048772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.806 [2024-07-15 10:12:31.048778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:103520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:17.806 [2024-07-15 10:12:31.048784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.806 [2024-07-15 10:12:31.048791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:103528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:17.806 [2024-07-15 10:12:31.048796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.806 [2024-07-15 10:12:31.048806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:103536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:17.806 [2024-07-15 10:12:31.048812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.806 [2024-07-15 10:12:31.048818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:103544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:17.806 [2024-07-15 10:12:31.048824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.806 [2024-07-15 10:12:31.048830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:103552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:17.806 [2024-07-15 10:12:31.048835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.806 [2024-07-15 10:12:31.048842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:103560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:17.806 [2024-07-15 10:12:31.048847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.806 [2024-07-15 10:12:31.048868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:103568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:17.806 [2024-07-15 10:12:31.048874] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.806 [2024-07-15 10:12:31.048881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:103576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:17.806 [2024-07-15 10:12:31.048887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.806 [2024-07-15 10:12:31.048894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:103584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:17.806 [2024-07-15 10:12:31.048899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.806 [2024-07-15 10:12:31.048907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:103592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:17.806 [2024-07-15 10:12:31.048912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.806 [2024-07-15 10:12:31.048919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:103600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:17.806 [2024-07-15 10:12:31.048929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.806 [2024-07-15 10:12:31.048936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:103608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:17.806 [2024-07-15 10:12:31.048942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.806 [2024-07-15 10:12:31.048949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:103616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:17.806 [2024-07-15 10:12:31.048954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.806 [2024-07-15 10:12:31.048961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:103624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:17.806 [2024-07-15 10:12:31.048966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.806 [2024-07-15 10:12:31.048978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:103632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:17.806 [2024-07-15 10:12:31.048983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.806 [2024-07-15 10:12:31.048990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:103640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:17.806 [2024-07-15 10:12:31.048996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.806 [2024-07-15 10:12:31.049004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:103648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:17.806 [2024-07-15 10:12:31.049010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.806 [2024-07-15 10:12:31.049016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:103656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:17.806 [2024-07-15 10:12:31.049021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.806 [2024-07-15 10:12:31.049028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:103664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:17.806 [2024-07-15 10:12:31.049038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.806 [2024-07-15 10:12:31.049044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:103672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:17.806 [2024-07-15 10:12:31.049050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.806 [2024-07-15 10:12:31.049056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:103680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:17.806 [2024-07-15 10:12:31.049061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.806 [2024-07-15 10:12:31.049068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:103688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:17.806 [2024-07-15 10:12:31.049079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.806 [2024-07-15 10:12:31.049086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:103696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:17.806 [2024-07-15 10:12:31.049091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.806 [2024-07-15 10:12:31.049098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:103704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:17.806 [2024-07-15 10:12:31.049104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.806 [2024-07-15 10:12:31.049110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:103712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:17.806 [2024-07-15 10:12:31.049115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.806 [2024-07-15 10:12:31.049122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:103720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:17.806 [2024-07-15 10:12:31.049127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.806 [2024-07-15 10:12:31.049139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:103728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:17.806 [2024-07-15 10:12:31.049145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.806 [2024-07-15 10:12:31.049152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:103736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:17.806 [2024-07-15 10:12:31.049157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.806 [2024-07-15 10:12:31.049164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:103744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:17.806 [2024-07-15 10:12:31.049169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.806 [2024-07-15 10:12:31.049176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:103752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:17.806 [2024-07-15 10:12:31.049187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.806 [2024-07-15 10:12:31.049194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:103760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:17.806 [2024-07-15 10:12:31.049199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.806 [2024-07-15 10:12:31.049205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:103768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:17.806 [2024-07-15 10:12:31.049211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.806 [2024-07-15 10:12:31.049217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:103776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:17.806 [2024-07-15 10:12:31.049222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.806 [2024-07-15 10:12:31.049233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:103784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:17.806 [2024-07-15 10:12:31.049239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.806 [2024-07-15 10:12:31.049246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:103792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:17.806 [2024-07-15 10:12:31.049253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.806 [2024-07-15 10:12:31.049259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:103800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:17.806 [2024-07-15 10:12:31.049265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.806 [2024-07-15 10:12:31.049271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:103808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:17.806 [2024-07-15 10:12:31.049281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:31:17.806 [2024-07-15 10:12:31.049288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:103816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:17.806 [2024-07-15 10:12:31.049294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.806 [2024-07-15 10:12:31.049300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:103824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:17.806 [2024-07-15 10:12:31.049306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.806 [2024-07-15 10:12:31.049314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:103832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:17.806 [2024-07-15 10:12:31.049319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.807 [2024-07-15 10:12:31.049330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:103840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:17.807 [2024-07-15 10:12:31.049336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.807 [2024-07-15 10:12:31.049343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:103848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:17.807 [2024-07-15 10:12:31.049349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.807 [2024-07-15 10:12:31.049370] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:17.807 [2024-07-15 10:12:31.049382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:103856 len:8 PRP1 0x0 PRP2 0x0 00:31:17.807 [2024-07-15 10:12:31.049388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.807 [2024-07-15 10:12:31.049396] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:17.807 [2024-07-15 10:12:31.049402] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:17.807 [2024-07-15 10:12:31.049407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:103864 len:8 PRP1 0x0 PRP2 0x0 00:31:17.807 [2024-07-15 10:12:31.049412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.807 [2024-07-15 10:12:31.049417] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:17.807 [2024-07-15 10:12:31.049427] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:17.807 [2024-07-15 10:12:31.049431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:103872 len:8 PRP1 0x0 PRP2 0x0 00:31:17.807 [2024-07-15 10:12:31.049437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.807 [2024-07-15 10:12:31.049442] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:17.807 [2024-07-15 10:12:31.049446] 
nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:17.807 [2024-07-15 10:12:31.049451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:103880 len:8 PRP1 0x0 PRP2 0x0 00:31:17.807 [2024-07-15 10:12:31.049456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.807 [2024-07-15 10:12:31.049461] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:17.807 [2024-07-15 10:12:31.049465] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:17.807 [2024-07-15 10:12:31.049475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:103888 len:8 PRP1 0x0 PRP2 0x0 00:31:17.807 [2024-07-15 10:12:31.049481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.807 [2024-07-15 10:12:31.049487] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:17.807 [2024-07-15 10:12:31.049491] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:17.807 [2024-07-15 10:12:31.049495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:103896 len:8 PRP1 0x0 PRP2 0x0 00:31:17.807 [2024-07-15 10:12:31.049500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.807 [2024-07-15 10:12:31.049506] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:17.807 [2024-07-15 10:12:31.049510] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:17.807 [2024-07-15 10:12:31.049520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:103904 len:8 PRP1 0x0 PRP2 0x0 00:31:17.807 [2024-07-15 10:12:31.049526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.807 [2024-07-15 10:12:31.049532] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:17.807 [2024-07-15 10:12:31.049537] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:17.807 [2024-07-15 10:12:31.049541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:103912 len:8 PRP1 0x0 PRP2 0x0 00:31:17.807 [2024-07-15 10:12:31.049547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.807 [2024-07-15 10:12:31.049552] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:17.807 [2024-07-15 10:12:31.049556] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:17.807 [2024-07-15 10:12:31.049561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:103920 len:8 PRP1 0x0 PRP2 0x0 00:31:17.807 [2024-07-15 10:12:31.049566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.807 [2024-07-15 10:12:31.049576] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:17.807 [2024-07-15 10:12:31.049581] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:17.807 [2024-07-15 10:12:31.049586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:103928 len:8 PRP1 0x0 PRP2 0x0 00:31:17.807 [2024-07-15 10:12:31.049591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.807 [2024-07-15 10:12:31.049597] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:17.807 [2024-07-15 10:12:31.049602] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:17.807 [2024-07-15 10:12:31.049606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:103936 len:8 PRP1 0x0 PRP2 0x0 00:31:17.807 [2024-07-15 10:12:31.049611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.807 [2024-07-15 10:12:31.049621] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:17.807 [2024-07-15 10:12:31.049626] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:17.807 [2024-07-15 10:12:31.049631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:103944 len:8 PRP1 0x0 PRP2 0x0 00:31:17.807 [2024-07-15 10:12:31.049636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.807 [2024-07-15 10:12:31.049642] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:17.807 [2024-07-15 10:12:31.049646] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:17.807 [2024-07-15 10:12:31.049650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:103952 len:8 PRP1 0x0 PRP2 0x0 00:31:17.807 [2024-07-15 10:12:31.049656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.807 [2024-07-15 10:12:31.049676] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:17.807 [2024-07-15 10:12:31.049680] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:17.807 [2024-07-15 10:12:31.049685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:103960 len:8 PRP1 0x0 PRP2 0x0 00:31:17.807 [2024-07-15 10:12:31.049691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.807 10:12:31 nvmf_tcp.nvmf_timeout -- host/timeout.sh@101 -- # sleep 3 00:31:17.807 [2024-07-15 10:12:31.068536] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:17.807 [2024-07-15 10:12:31.068581] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:17.807 [2024-07-15 10:12:31.068594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:103968 len:8 PRP1 0x0 PRP2 0x0 00:31:17.807 [2024-07-15 10:12:31.068606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.807 [2024-07-15 10:12:31.068616] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:17.807 [2024-07-15 
10:12:31.068623] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:17.807 [2024-07-15 10:12:31.068631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:103976 len:8 PRP1 0x0 PRP2 0x0 00:31:17.807 [2024-07-15 10:12:31.068639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.807 [2024-07-15 10:12:31.068717] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x19977b0 was disconnected and freed. reset controller. 00:31:17.807 [2024-07-15 10:12:31.068897] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:17.807 [2024-07-15 10:12:31.068923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.807 [2024-07-15 10:12:31.068936] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:17.807 [2024-07-15 10:12:31.068945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.807 [2024-07-15 10:12:31.068954] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:17.807 [2024-07-15 10:12:31.068962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.807 [2024-07-15 10:12:31.068972] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:17.807 [2024-07-15 10:12:31.068981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.807 [2024-07-15 10:12:31.068989] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1919240 is same with the state(5) to be set 00:31:17.807 [2024-07-15 10:12:31.069287] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:17.807 [2024-07-15 10:12:31.069316] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1919240 (9): Bad file descriptor 00:31:17.807 [2024-07-15 10:12:31.069418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:17.807 [2024-07-15 10:12:31.069444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1919240 with addr=10.0.0.2, port=4420 00:31:17.807 [2024-07-15 10:12:31.069454] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1919240 is same with the state(5) to be set 00:31:17.807 [2024-07-15 10:12:31.069469] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1919240 (9): Bad file descriptor 00:31:17.807 [2024-07-15 10:12:31.069484] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:17.807 [2024-07-15 10:12:31.069492] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:17.807 [2024-07-15 10:12:31.069502] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:31:17.807 [2024-07-15 10:12:31.069521] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:17.807 [2024-07-15 10:12:31.069531] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:18.747 [2024-07-15 10:12:32.067729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:18.747 [2024-07-15 10:12:32.067774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1919240 with addr=10.0.0.2, port=4420 00:31:18.747 [2024-07-15 10:12:32.067784] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1919240 is same with the state(5) to be set 00:31:18.747 [2024-07-15 10:12:32.067802] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1919240 (9): Bad file descriptor 00:31:18.747 [2024-07-15 10:12:32.067813] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:18.747 [2024-07-15 10:12:32.067818] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:18.747 [2024-07-15 10:12:32.067825] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:18.747 [2024-07-15 10:12:32.067842] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:18.747 [2024-07-15 10:12:32.067849] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:19.685 [2024-07-15 10:12:33.066029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.685 [2024-07-15 10:12:33.066075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1919240 with addr=10.0.0.2, port=4420 00:31:19.685 [2024-07-15 10:12:33.066086] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1919240 is same with the state(5) to be set 00:31:19.685 [2024-07-15 10:12:33.066103] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1919240 (9): Bad file descriptor 00:31:19.685 [2024-07-15 10:12:33.066116] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:19.685 [2024-07-15 10:12:33.066121] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:19.685 [2024-07-15 10:12:33.066129] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:19.685 [2024-07-15 10:12:33.066147] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:19.685 [2024-07-15 10:12:33.066154] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:20.622 [2024-07-15 10:12:34.066807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:20.622 [2024-07-15 10:12:34.066873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1919240 with addr=10.0.0.2, port=4420
00:31:20.622 [2024-07-15 10:12:34.066883] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1919240 is same with the state(5) to be set
00:31:20.622 [2024-07-15 10:12:34.067061] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1919240 (9): Bad file descriptor
00:31:20.622 [2024-07-15 10:12:34.067262] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:20.622 [2024-07-15 10:12:34.067272] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:20.622 [2024-07-15 10:12:34.067280] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:20.622 10:12:34 nvmf_tcp.nvmf_timeout -- host/timeout.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:31:20.622 [2024-07-15 10:12:34.070073] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:20.622 [2024-07-15 10:12:34.070096] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:20.882 [2024-07-15 10:12:34.251344] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:31:20.882 10:12:34 nvmf_tcp.nvmf_timeout -- host/timeout.sh@103 -- # wait 96478
00:31:21.822 [2024-07-15 10:12:35.103542] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
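The records above are the turning point of this test case: while the target has no TCP listener, every reconnect attempt dies in posix_sock_create with errno = 111 (connection refused) and each controller reset fails, and only after host/timeout.sh@102 re-adds the listener does the very next reset cycle complete ("Resetting controller successful."). A minimal sketch of that recovery step, reusing the rpc.py call recorded in the trace; the $bdevperf_pid variable is an illustrative stand-in for the background bdevperf process (pid 96478 in this run):

    # target side: restore the TCP listener that the test removed earlier
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener \
        nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # host side: nothing to trigger manually; the bdev_nvme reset retry loop reconnects
    # on its own, so the script simply waits for the background bdevperf to finish
    wait "$bdevperf_pid"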
00:31:27.102
00:31:27.102 Latency(us)
00:31:27.102 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:31:27.102 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:31:27.102 Verification LBA range: start 0x0 length 0x4000
00:31:27.102 NVMe0n1 : 10.01 6815.83 26.62 5024.51 0.00 10787.71 427.49 3033086.21
00:31:27.102 ===================================================================================================================
00:31:27.102 Total : 6815.83 26.62 5024.51 0.00 10787.71 0.00 3033086.21
00:31:27.102 0
00:31:27.102 10:12:39 nvmf_tcp.nvmf_timeout -- host/timeout.sh@105 -- # killprocess 96314
00:31:27.102 10:12:39 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@948 -- # '[' -z 96314 ']'
00:31:27.102 10:12:39 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@952 -- # kill -0 96314
00:31:27.102 10:12:39 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # uname
00:31:27.102 10:12:39 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:31:27.102 10:12:39 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 96314
00:31:27.102 killing process with pid 96314
Received shutdown signal, test time was about 10.000000 seconds
00:31:27.102
00:31:27.102 Latency(us)
00:31:27.102 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:31:27.102 ===================================================================================================================
00:31:27.102 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:31:27.102 10:12:40 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # process_name=reactor_2
00:31:27.102 10:12:40 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']'
00:31:27.102 10:12:40 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@966 -- # echo 'killing process with pid 96314'
00:31:27.102 10:12:40 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@967 -- # kill 96314
00:31:27.102 10:12:40 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@972 -- # wait 96314
00:31:27.102 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:31:27.102 10:12:40 nvmf_tcp.nvmf_timeout -- host/timeout.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f
00:31:27.102 10:12:40 nvmf_tcp.nvmf_timeout -- host/timeout.sh@110 -- # bdevperf_pid=96603
00:31:27.102 10:12:40 nvmf_tcp.nvmf_timeout -- host/timeout.sh@112 -- # waitforlisten 96603 /var/tmp/bdevperf.sock
00:31:27.102 10:12:40 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@829 -- # '[' -z 96603 ']'
00:31:27.102 10:12:40 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:31:27.102 10:12:40 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@834 -- # local max_retries=100
00:31:27.102 10:12:40 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:31:27.102 10:12:40 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@838 -- # xtrace_disable
00:31:27.102 10:12:40 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x
00:31:27.102 [2024-07-15 10:12:40.233704] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization...
00:31:27.102 [2024-07-15 10:12:40.233769] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96603 ]
00:31:27.102 [2024-07-15 10:12:40.371004] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:31:27.102 [2024-07-15 10:12:40.476225] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:31:27.669 10:12:41 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:31:27.669 10:12:41 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@862 -- # return 0
00:31:27.669 10:12:41 nvmf_tcp.nvmf_timeout -- host/timeout.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 96603 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_timeout.bt
00:31:27.669 10:12:41 nvmf_tcp.nvmf_timeout -- host/timeout.sh@116 -- # dtrace_pid=96627
00:31:27.669 10:12:41 nvmf_tcp.nvmf_timeout -- host/timeout.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9
00:31:27.928 10:12:41 nvmf_tcp.nvmf_timeout -- host/timeout.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2
00:31:28.189 NVMe0n1
00:31:28.189 10:12:41 nvmf_tcp.nvmf_timeout -- host/timeout.sh@124 -- # rpc_pid=96686
00:31:28.189 10:12:41 nvmf_tcp.nvmf_timeout -- host/timeout.sh@123 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:31:28.189 10:12:41 nvmf_tcp.nvmf_timeout -- host/timeout.sh@125 -- # sleep 1
00:31:28.448 Running I/O for 10 seconds...
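At this point the harness has started a second bdevperf instance idle, attached the bpftrace probe script, and only then created the NVMe bdev with an explicit controller-loss timeout and reconnect delay before launching the workload. A minimal reconstruction of that setup, built from the exact binaries and arguments shown in the trace above (the backgrounding, PID handling and readiness waits are simplified stand-ins for what timeout.sh and autotest_common.sh actually do):

    # Start bdevperf idle (-z) with its own RPC socket; the job shape
    # (-q 128 -o 4096 -w randread -t 10) is defined up front.
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z \
        -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f &
    bdevperf_pid=$!

    # Attach the bpftrace script that records reset/reconnect events to trace.txt.
    /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh "$bdevperf_pid" \
        /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_timeout.bt &

    # Tune bdev_nvme (flags copied verbatim from the trace), then attach the
    # controller with a 5 s ctrlr-loss timeout and a 2 s reconnect delay.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
        bdev_nvme_set_options -r -1 -e 9
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
        bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2

    # Kick off the pre-configured job over the same RPC socket.
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
        -s /var/tmp/bdevperf.sock perform_tests &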
00:31:29.391 10:12:42 nvmf_tcp.nvmf_timeout -- host/timeout.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:29.391 [2024-07-15 10:12:42.909765] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce7e00 is same with the state(5) to be set 00:31:29.391 [2024-07-15 10:12:42.909817] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce7e00 is same with the state(5) to be set 00:31:29.391 [2024-07-15 10:12:42.909824] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce7e00 is same with the state(5) to be set 00:31:29.391 [2024-07-15 10:12:42.909830] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce7e00 is same with the state(5) to be set 00:31:29.391 [2024-07-15 10:12:42.909836] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce7e00 is same with the state(5) to be set 00:31:29.391 [2024-07-15 10:12:42.909841] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce7e00 is same with the state(5) to be set 00:31:29.391 [2024-07-15 10:12:42.909846] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce7e00 is same with the state(5) to be set 00:31:29.391 [2024-07-15 10:12:42.909851] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce7e00 is same with the state(5) to be set 00:31:29.391 [2024-07-15 10:12:42.909856] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce7e00 is same with the state(5) to be set 00:31:29.391 [2024-07-15 10:12:42.909861] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce7e00 is same with the state(5) to be set 00:31:29.391 [2024-07-15 10:12:42.909867] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce7e00 is same with the state(5) to be set 00:31:29.391 [2024-07-15 10:12:42.909872] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce7e00 is same with the state(5) to be set 00:31:29.391 [2024-07-15 10:12:42.909878] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce7e00 is same with the state(5) to be set 00:31:29.391 [2024-07-15 10:12:42.909884] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce7e00 is same with the state(5) to be set 00:31:29.391 [2024-07-15 10:12:42.909889] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce7e00 is same with the state(5) to be set 00:31:29.391 [2024-07-15 10:12:42.909894] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce7e00 is same with the state(5) to be set 00:31:29.391 [2024-07-15 10:12:42.909899] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce7e00 is same with the state(5) to be set 00:31:29.391 [2024-07-15 10:12:42.909904] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce7e00 is same with the state(5) to be set 00:31:29.391 [2024-07-15 10:12:42.909909] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce7e00 is same with the state(5) to be set 00:31:29.391 [2024-07-15 10:12:42.909914] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce7e00 is same with the state(5) to be set 00:31:29.391 [2024-07-15 10:12:42.909919] 
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce7e00 is same with the state(5) to be set 00:31:29.391 [2024-07-15 10:12:42.909924] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce7e00 is same with the state(5) to be set 00:31:29.391 [2024-07-15 10:12:42.909929] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce7e00 is same with the state(5) to be set 00:31:29.391 [2024-07-15 10:12:42.909934] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce7e00 is same with the state(5) to be set 00:31:29.391 [2024-07-15 10:12:42.909938] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce7e00 is same with the state(5) to be set 00:31:29.391 [2024-07-15 10:12:42.909943] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce7e00 is same with the state(5) to be set 00:31:29.391 [2024-07-15 10:12:42.910182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:40056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:29.391 [2024-07-15 10:12:42.910221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:29.391 [2024-07-15 10:12:42.910239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:123216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:29.391 [2024-07-15 10:12:42.910245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:29.391 [2024-07-15 10:12:42.910254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:90432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:29.391 [2024-07-15 10:12:42.910260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:29.391 [2024-07-15 10:12:42.910268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:83008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:29.391 [2024-07-15 10:12:42.910274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:29.391 [2024-07-15 10:12:42.910283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:126432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:29.391 [2024-07-15 10:12:42.910288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:29.391 [2024-07-15 10:12:42.910296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:9336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:29.391 [2024-07-15 10:12:42.910310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:29.391 [2024-07-15 10:12:42.910318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:106360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:29.391 [2024-07-15 10:12:42.910324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:29.391 [2024-07-15 10:12:42.910339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:115272 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:29.391 [2024-07-15 10:12:42.910346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:29.391 [2024-07-15 10:12:42.910354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:123000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:29.391 [2024-07-15 10:12:42.910360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:29.392 [2024-07-15 10:12:42.910368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:57456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:29.392 [2024-07-15 10:12:42.910380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:29.392 [2024-07-15 10:12:42.910389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:117040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:29.392 [2024-07-15 10:12:42.910395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:29.392 [2024-07-15 10:12:42.910403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:41176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:29.392 [2024-07-15 10:12:42.910415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:29.392 [2024-07-15 10:12:42.910423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:38976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:29.392 [2024-07-15 10:12:42.910430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:29.392 [2024-07-15 10:12:42.910437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:112904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:29.392 [2024-07-15 10:12:42.910444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:29.392 [2024-07-15 10:12:42.910452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:74536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:29.392 [2024-07-15 10:12:42.910458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:29.392 [2024-07-15 10:12:42.910465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:40608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:29.392 [2024-07-15 10:12:42.910471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:29.392 [2024-07-15 10:12:42.910479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:40640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:29.392 [2024-07-15 10:12:42.910496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:29.392 [2024-07-15 10:12:42.910505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:19392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:31:29.392 [2024-07-15 10:12:42.910517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:29.392 [2024-07-15 10:12:42.910525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:73648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:29.392 [2024-07-15 10:12:42.910532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:29.392 [2024-07-15 10:12:42.910540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:64440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:29.392 [2024-07-15 10:12:42.910546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:29.392 [2024-07-15 10:12:42.910554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:119536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:29.392 [2024-07-15 10:12:42.910567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:29.392 [2024-07-15 10:12:42.910575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:57192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:29.392 [2024-07-15 10:12:42.910582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:29.392 [2024-07-15 10:12:42.910590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:52856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:29.392 [2024-07-15 10:12:42.910596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:29.392 [2024-07-15 10:12:42.910609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:10688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:29.392 [2024-07-15 10:12:42.910616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:29.392 [2024-07-15 10:12:42.910624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:63848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:29.392 [2024-07-15 10:12:42.910630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:29.392 [2024-07-15 10:12:42.910637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:46184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:29.392 [2024-07-15 10:12:42.910650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:29.392 [2024-07-15 10:12:42.910666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:117968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:29.392 [2024-07-15 10:12:42.910674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:29.392 [2024-07-15 10:12:42.910685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:56304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:29.392 [2024-07-15 
10:12:42.910692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:29.392 [2024-07-15 10:12:42.910699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:19472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:29.392 [2024-07-15 10:12:42.910705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:29.392 [2024-07-15 10:12:42.910713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:13472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:29.392 [2024-07-15 10:12:42.910718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:29.392 [2024-07-15 10:12:42.910726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:95608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:29.392 [2024-07-15 10:12:42.910732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:29.392 [2024-07-15 10:12:42.910740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:22232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:29.392 [2024-07-15 10:12:42.910746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:29.392 [2024-07-15 10:12:42.910753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:117384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:29.392 [2024-07-15 10:12:42.910759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:29.392 [2024-07-15 10:12:42.910773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:41856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:29.392 [2024-07-15 10:12:42.910779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:29.392 [2024-07-15 10:12:42.910787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:53432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:29.392 [2024-07-15 10:12:42.910794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:29.392 [2024-07-15 10:12:42.910801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:57504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:29.392 [2024-07-15 10:12:42.910813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:29.392 [2024-07-15 10:12:42.910822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:18888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:29.392 [2024-07-15 10:12:42.910828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:29.392 [2024-07-15 10:12:42.910836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:3552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:29.392 [2024-07-15 10:12:42.910843] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:29.392 [2024-07-15 10:12:42.910856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:74832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:29.392 [2024-07-15 10:12:42.910863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:29.392 [2024-07-15 10:12:42.910871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:110832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:29.392 [2024-07-15 10:12:42.910877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:29.392 [2024-07-15 10:12:42.910886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:99408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:29.392 [2024-07-15 10:12:42.910892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:29.392 [2024-07-15 10:12:42.910900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:29.392 [2024-07-15 10:12:42.910906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:29.392 [2024-07-15 10:12:42.910914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:9104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:29.392 [2024-07-15 10:12:42.910920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:29.392 [2024-07-15 10:12:42.910927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:90360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:29.392 [2024-07-15 10:12:42.910933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:29.392 [2024-07-15 10:12:42.910947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:29.392 [2024-07-15 10:12:42.910954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:29.392 [2024-07-15 10:12:42.910962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:79944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:29.392 [2024-07-15 10:12:42.910968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:29.392 [2024-07-15 10:12:42.910975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:48944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:29.392 [2024-07-15 10:12:42.910981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:29.392 [2024-07-15 10:12:42.910989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:64328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:29.392 [2024-07-15 10:12:42.910999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:29.392 [2024-07-15 10:12:42.911007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:37088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:29.392 [2024-07-15 10:12:42.911013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:29.392 [2024-07-15 10:12:42.911020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:37272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:29.392 [2024-07-15 10:12:42.911032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:29.392 [2024-07-15 10:12:42.911040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:30064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:29.392 [2024-07-15 10:12:42.911046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:29.392 [2024-07-15 10:12:42.911054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:60416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:29.392 [2024-07-15 10:12:42.911060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:29.393 [2024-07-15 10:12:42.911068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:58216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:29.393 [2024-07-15 10:12:42.911078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:29.393 [2024-07-15 10:12:42.911087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:6392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:29.393 [2024-07-15 10:12:42.911092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:29.393 [2024-07-15 10:12:42.911100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:25120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:29.393 [2024-07-15 10:12:42.911106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:29.393 [2024-07-15 10:12:42.911114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:37088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:29.393 [2024-07-15 10:12:42.911120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:29.393 [2024-07-15 10:12:42.911133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:111176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:29.393 [2024-07-15 10:12:42.911139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:29.393 [2024-07-15 10:12:42.911147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:121552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:29.393 [2024-07-15 10:12:42.911153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:29.393 [2024-07-15 10:12:42.911161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:20088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:29.393 [2024-07-15 10:12:42.911170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:29.393 [2024-07-15 10:12:42.911178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:2296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:29.393 [2024-07-15 10:12:42.911184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:29.393 [2024-07-15 10:12:42.911191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:58256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:29.393 [2024-07-15 10:12:42.911197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:29.393 [2024-07-15 10:12:42.911209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:77064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:29.393 [2024-07-15 10:12:42.911215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:29.393 [2024-07-15 10:12:42.911222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:57664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:29.393 [2024-07-15 10:12:42.911228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:29.393 [2024-07-15 10:12:42.911236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:42224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:29.393 [2024-07-15 10:12:42.911243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:29.393 [2024-07-15 10:12:42.911251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:58040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:29.393 [2024-07-15 10:12:42.911257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:29.393 [2024-07-15 10:12:42.911264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:16136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:29.393 [2024-07-15 10:12:42.911270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:29.393 [2024-07-15 10:12:42.911278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:29.393 [2024-07-15 10:12:42.911284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:29.393 [2024-07-15 10:12:42.911291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:79584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:29.393 [2024-07-15 10:12:42.911301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:31:29.393 [2024-07-15 10:12:42.911309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:12488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:29.393 [2024-07-15 10:12:42.911315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:29.393 [2024-07-15 10:12:42.911323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:68008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:29.393 [2024-07-15 10:12:42.911329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:29.393 [2024-07-15 10:12:42.911336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:43216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:29.393 [2024-07-15 10:12:42.911346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:29.393 [2024-07-15 10:12:42.911354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:25224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:29.393 [2024-07-15 10:12:42.911360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:29.393 [2024-07-15 10:12:42.911367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:62528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:29.393 [2024-07-15 10:12:42.911373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:29.393 [2024-07-15 10:12:42.911386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:57168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:29.393 [2024-07-15 10:12:42.911392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:29.393 [2024-07-15 10:12:42.911400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:20104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:29.393 [2024-07-15 10:12:42.911406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:29.393 [2024-07-15 10:12:42.911413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:68840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:29.393 [2024-07-15 10:12:42.911419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:29.393 [2024-07-15 10:12:42.911430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:108848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:29.393 [2024-07-15 10:12:42.911436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:29.393 [2024-07-15 10:12:42.911444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:113632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:29.393 [2024-07-15 10:12:42.911449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:29.393 [2024-07-15 10:12:42.911457] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:29.393 [2024-07-15 10:12:42.911467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:29.393 [2024-07-15 10:12:42.911475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:117960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:29.393 [2024-07-15 10:12:42.911481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:29.393 [2024-07-15 10:12:42.911488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:45336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:29.393 [2024-07-15 10:12:42.911494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:29.393 [2024-07-15 10:12:42.911501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:111824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:29.393 [2024-07-15 10:12:42.911512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:29.393 [2024-07-15 10:12:42.911521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:105936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:29.393 [2024-07-15 10:12:42.911526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:29.393 [2024-07-15 10:12:42.911534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:129584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:29.393 [2024-07-15 10:12:42.911539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:29.393 [2024-07-15 10:12:42.911547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:123632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:29.393 [2024-07-15 10:12:42.911557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:29.393 [2024-07-15 10:12:42.911565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:26856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:29.393 [2024-07-15 10:12:42.911571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:29.393 [2024-07-15 10:12:42.911578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:121176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:29.393 [2024-07-15 10:12:42.911584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:29.393 [2024-07-15 10:12:42.911591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:114400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:29.393 [2024-07-15 10:12:42.911597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:29.393 [2024-07-15 10:12:42.911608] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:6984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:29.393 [2024-07-15 10:12:42.911614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:29.393 [2024-07-15 10:12:42.911622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:102552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:29.393 [2024-07-15 10:12:42.911628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:29.393 [2024-07-15 10:12:42.911636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:9408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:29.393 [2024-07-15 10:12:42.911646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:29.393 [2024-07-15 10:12:42.911654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:9688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:29.393 [2024-07-15 10:12:42.911667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:29.393 [2024-07-15 10:12:42.911676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:85024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:29.393 [2024-07-15 10:12:42.911682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:29.393 [2024-07-15 10:12:42.911697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:69072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:29.393 [2024-07-15 10:12:42.911703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:29.393 [2024-07-15 10:12:42.911710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:21168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:29.393 [2024-07-15 10:12:42.911717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:29.394 [2024-07-15 10:12:42.911724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:19416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:29.394 [2024-07-15 10:12:42.911730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:29.394 [2024-07-15 10:12:42.911738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:118008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:29.394 [2024-07-15 10:12:42.911744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:29.394 [2024-07-15 10:12:42.911751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:48096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:29.394 [2024-07-15 10:12:42.911762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:29.394 [2024-07-15 10:12:42.911770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:0 nsid:1 lba:74616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:29.394 [2024-07-15 10:12:42.911776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:29.394 [2024-07-15 10:12:42.911784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:28408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:29.394 [2024-07-15 10:12:42.911790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:29.394 [2024-07-15 10:12:42.911803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:10976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:29.394 [2024-07-15 10:12:42.911810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:29.394 [2024-07-15 10:12:42.911818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:40888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:29.394 [2024-07-15 10:12:42.911824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:29.394 [2024-07-15 10:12:42.911831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:29.394 [2024-07-15 10:12:42.911837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:29.394 [2024-07-15 10:12:42.911849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:7912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:29.394 [2024-07-15 10:12:42.911856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:29.394 [2024-07-15 10:12:42.911863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:30800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:29.394 [2024-07-15 10:12:42.911869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:29.394 [2024-07-15 10:12:42.911877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:6080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:29.394 [2024-07-15 10:12:42.911883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:29.394 [2024-07-15 10:12:42.911891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:21016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:29.394 [2024-07-15 10:12:42.911902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:29.394 [2024-07-15 10:12:42.911909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:101560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:29.394 [2024-07-15 10:12:42.911915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:29.394 [2024-07-15 10:12:42.911923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:73048 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:29.394 [2024-07-15 10:12:42.911929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:29.394 [2024-07-15 10:12:42.911936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:84944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:29.394 [2024-07-15 10:12:42.911946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:29.394 [2024-07-15 10:12:42.911954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:99320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:29.394 [2024-07-15 10:12:42.911960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:29.394 [2024-07-15 10:12:42.911968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:113440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:29.394 [2024-07-15 10:12:42.911974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:29.394 [2024-07-15 10:12:42.911987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:26464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:29.394 [2024-07-15 10:12:42.911993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:29.394 [2024-07-15 10:12:42.912001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:114000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:29.394 [2024-07-15 10:12:42.912007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:29.394 [2024-07-15 10:12:42.912014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:41152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:29.394 [2024-07-15 10:12:42.912021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:29.394 [2024-07-15 10:12:42.912032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:110136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:29.394 [2024-07-15 10:12:42.912039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:29.394 [2024-07-15 10:12:42.912046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:47512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:29.394 [2024-07-15 10:12:42.912053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:29.394 [2024-07-15 10:12:42.912060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:96040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:29.394 [2024-07-15 10:12:42.912066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:29.394 [2024-07-15 10:12:42.912074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:72760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:31:29.394 [2024-07-15 10:12:42.912080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:29.394 [2024-07-15 10:12:42.912088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:113576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:29.394 [2024-07-15 10:12:42.912105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:29.394 [2024-07-15 10:12:42.912113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:23640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:29.394 [2024-07-15 10:12:42.912119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:29.394 [2024-07-15 10:12:42.912127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:15808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:29.394 [2024-07-15 10:12:42.912132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:29.394 [2024-07-15 10:12:42.912140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:14232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:29.394 [2024-07-15 10:12:42.912146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:29.394 [2024-07-15 10:12:42.912157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:72336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:29.394 [2024-07-15 10:12:42.912164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:29.394 [2024-07-15 10:12:42.912171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:101832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:29.394 [2024-07-15 10:12:42.912177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:29.394 [2024-07-15 10:12:42.912185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:130792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:29.394 [2024-07-15 10:12:42.912197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:29.394 [2024-07-15 10:12:42.912204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:102832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:29.394 [2024-07-15 10:12:42.912210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:29.394 [2024-07-15 10:12:42.912230] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:29.394 [2024-07-15 10:12:42.912239] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:29.394 [2024-07-15 10:12:42.912245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:29600 len:8 PRP1 0x0 PRP2 0x0 00:31:29.394 [2024-07-15 10:12:42.912251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:31:29.394 [2024-07-15 10:12:42.912300] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1e5f8d0 was disconnected and freed. reset controller. 00:31:29.394 [2024-07-15 10:12:42.912572] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:29.394 [2024-07-15 10:12:42.912643] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df2240 (9): Bad file descriptor 00:31:29.394 [2024-07-15 10:12:42.912740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.394 [2024-07-15 10:12:42.912753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df2240 with addr=10.0.0.2, port=4420 00:31:29.394 [2024-07-15 10:12:42.912761] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df2240 is same with the state(5) to be set 00:31:29.394 [2024-07-15 10:12:42.912774] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df2240 (9): Bad file descriptor 00:31:29.394 [2024-07-15 10:12:42.912784] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:29.394 [2024-07-15 10:12:42.912790] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:29.394 [2024-07-15 10:12:42.912797] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:29.394 [2024-07-15 10:12:42.912812] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:29.394 [2024-07-15 10:12:42.912818] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:29.394 10:12:42 nvmf_tcp.nvmf_timeout -- host/timeout.sh@128 -- # wait 96686 00:31:31.931 [2024-07-15 10:12:44.909203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.931 [2024-07-15 10:12:44.909267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df2240 with addr=10.0.0.2, port=4420 00:31:31.931 [2024-07-15 10:12:44.909278] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df2240 is same with the state(5) to be set 00:31:31.931 [2024-07-15 10:12:44.909297] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df2240 (9): Bad file descriptor 00:31:31.931 [2024-07-15 10:12:44.909319] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:31.931 [2024-07-15 10:12:44.909326] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:31.931 [2024-07-15 10:12:44.909333] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:31.931 [2024-07-15 10:12:44.909353] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:31.931 [2024-07-15 10:12:44.909359] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:33.836 [2024-07-15 10:12:46.905768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.836 [2024-07-15 10:12:46.905847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df2240 with addr=10.0.0.2, port=4420 00:31:33.836 [2024-07-15 10:12:46.905858] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df2240 is same with the state(5) to be set 00:31:33.836 [2024-07-15 10:12:46.905896] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df2240 (9): Bad file descriptor 00:31:33.836 [2024-07-15 10:12:46.905910] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:33.836 [2024-07-15 10:12:46.905916] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:33.836 [2024-07-15 10:12:46.905924] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:33.836 [2024-07-15 10:12:46.905945] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:33.836 [2024-07-15 10:12:46.905952] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:35.740 [2024-07-15 10:12:48.902180] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:35.740 [2024-07-15 10:12:48.902241] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:35.740 [2024-07-15 10:12:48.902249] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:35.740 [2024-07-15 10:12:48.902256] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:31:35.740 [2024-07-15 10:12:48.902277] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:36.679 00:31:36.680 Latency(us) 00:31:36.680 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:36.680 Job: NVMe0n1 (Core Mask 0x4, workload: randread, depth: 128, IO size: 4096) 00:31:36.680 NVMe0n1 : 8.14 3182.98 12.43 15.72 0.00 40076.78 2890.45 7033243.39 00:31:36.680 =================================================================================================================== 00:31:36.680 Total : 3182.98 12.43 15.72 0.00 40076.78 2890.45 7033243.39 00:31:36.680 0 00:31:36.680 10:12:49 nvmf_tcp.nvmf_timeout -- host/timeout.sh@129 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:31:36.680 Attaching 5 probes... 
00:31:36.680 1201.455981: reset bdev controller NVMe0 00:31:36.680 1201.578913: reconnect bdev controller NVMe0 00:31:36.680 3197.967628: reconnect delay bdev controller NVMe0 00:31:36.680 3197.991144: reconnect bdev controller NVMe0 00:31:36.680 5194.502467: reconnect delay bdev controller NVMe0 00:31:36.680 5194.525899: reconnect bdev controller NVMe0 00:31:36.680 7191.045156: reconnect delay bdev controller NVMe0 00:31:36.680 7191.067806: reconnect bdev controller NVMe0 00:31:36.680 10:12:49 nvmf_tcp.nvmf_timeout -- host/timeout.sh@132 -- # grep -c 'reconnect delay bdev controller NVMe0' 00:31:36.680 10:12:49 nvmf_tcp.nvmf_timeout -- host/timeout.sh@132 -- # (( 3 <= 2 )) 00:31:36.680 10:12:49 nvmf_tcp.nvmf_timeout -- host/timeout.sh@136 -- # kill 96627 00:31:36.680 10:12:49 nvmf_tcp.nvmf_timeout -- host/timeout.sh@137 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:31:36.680 10:12:49 nvmf_tcp.nvmf_timeout -- host/timeout.sh@139 -- # killprocess 96603 00:31:36.680 10:12:49 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@948 -- # '[' -z 96603 ']' 00:31:36.680 10:12:49 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@952 -- # kill -0 96603 00:31:36.680 10:12:49 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # uname 00:31:36.680 10:12:49 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:31:36.680 10:12:49 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 96603 00:31:36.680 killing process with pid 96603 00:31:36.680 Received shutdown signal, test time was about 8.216765 seconds 00:31:36.680 00:31:36.680 Latency(us) 00:31:36.680 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:36.680 =================================================================================================================== 00:31:36.680 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:36.680 10:12:49 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:31:36.680 10:12:49 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:31:36.680 10:12:49 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@966 -- # echo 'killing process with pid 96603' 00:31:36.680 10:12:49 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@967 -- # kill 96603 00:31:36.680 10:12:49 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@972 -- # wait 96603 00:31:36.680 10:12:50 nvmf_tcp.nvmf_timeout -- host/timeout.sh@141 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:36.939 10:12:50 nvmf_tcp.nvmf_timeout -- host/timeout.sh@143 -- # trap - SIGINT SIGTERM EXIT 00:31:36.939 10:12:50 nvmf_tcp.nvmf_timeout -- host/timeout.sh@145 -- # nvmftestfini 00:31:36.939 10:12:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@488 -- # nvmfcleanup 00:31:36.939 10:12:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@117 -- # sync 00:31:36.939 10:12:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:31:36.939 10:12:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@120 -- # set +e 00:31:36.939 10:12:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@121 -- # for i in {1..20} 00:31:36.939 10:12:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:31:36.939 rmmod nvme_tcp 00:31:36.939 rmmod nvme_fabrics 00:31:36.939 rmmod nvme_keyring 00:31:36.939 10:12:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:31:36.939 10:12:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@124 -- # 
set -e 00:31:36.939 10:12:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@125 -- # return 0 00:31:36.939 10:12:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@489 -- # '[' -n 96020 ']' 00:31:36.939 10:12:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@490 -- # killprocess 96020 00:31:36.939 10:12:50 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@948 -- # '[' -z 96020 ']' 00:31:36.939 10:12:50 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@952 -- # kill -0 96020 00:31:36.939 10:12:50 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # uname 00:31:36.939 10:12:50 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:31:36.939 10:12:50 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 96020 00:31:36.939 10:12:50 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:31:36.939 10:12:50 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:31:36.939 10:12:50 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@966 -- # echo 'killing process with pid 96020' 00:31:36.939 killing process with pid 96020 00:31:36.939 10:12:50 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@967 -- # kill 96020 00:31:36.939 10:12:50 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@972 -- # wait 96020 00:31:37.198 10:12:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:31:37.198 10:12:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:31:37.198 10:12:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:31:37.198 10:12:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:37.198 10:12:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@278 -- # remove_spdk_ns 00:31:37.198 10:12:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:37.198 10:12:50 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:37.198 10:12:50 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:37.198 10:12:50 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:31:37.198 00:31:37.198 real 0m45.986s 00:31:37.198 user 2m15.321s 00:31:37.198 sys 0m4.209s 00:31:37.198 10:12:50 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@1124 -- # xtrace_disable 00:31:37.198 10:12:50 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:31:37.198 ************************************ 00:31:37.198 END TEST nvmf_timeout 00:31:37.198 ************************************ 00:31:37.457 10:12:50 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:31:37.457 10:12:50 nvmf_tcp -- nvmf/nvmf.sh@121 -- # [[ virt == phy ]] 00:31:37.457 10:12:50 nvmf_tcp -- nvmf/nvmf.sh@126 -- # timing_exit host 00:31:37.457 10:12:50 nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:31:37.457 10:12:50 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:37.457 10:12:50 nvmf_tcp -- nvmf/nvmf.sh@128 -- # trap - SIGINT SIGTERM EXIT 00:31:37.457 ************************************ 00:31:37.457 END TEST nvmf_tcp 00:31:37.457 ************************************ 00:31:37.457 00:31:37.457 real 14m46.789s 00:31:37.457 user 39m20.655s 00:31:37.457 sys 3m0.927s 00:31:37.457 10:12:50 nvmf_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:31:37.457 10:12:50 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:37.457 10:12:50 -- common/autotest_common.sh@1142 -- 
# return 0 00:31:37.457 10:12:50 -- spdk/autotest.sh@288 -- # [[ 0 -eq 0 ]] 00:31:37.457 10:12:50 -- spdk/autotest.sh@289 -- # run_test spdkcli_nvmf_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:31:37.457 10:12:50 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:31:37.457 10:12:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:37.457 10:12:50 -- common/autotest_common.sh@10 -- # set +x 00:31:37.457 ************************************ 00:31:37.457 START TEST spdkcli_nvmf_tcp 00:31:37.457 ************************************ 00:31:37.457 10:12:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:31:37.717 * Looking for test storage... 00:31:37.717 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:31:37.717 10:12:51 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:31:37.717 10:12:51 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:31:37.717 10:12:51 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:31:37.717 10:12:51 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:31:37.717 10:12:51 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:31:37.717 10:12:51 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:37.717 10:12:51 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:37.717 10:12:51 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:37.717 10:12:51 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:37.717 10:12:51 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:37.717 10:12:51 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:37.717 10:12:51 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:37.717 10:12:51 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:37.717 10:12:51 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:37.717 10:12:51 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:37.717 10:12:51 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec 00:31:37.717 10:12:51 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=a2b6b25a-cc90-4aea-9f09-c06f8a634aec 00:31:37.717 10:12:51 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:37.717 10:12:51 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:37.717 10:12:51 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:31:37.717 10:12:51 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:37.717 10:12:51 spdkcli_nvmf_tcp -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:31:37.717 10:12:51 spdkcli_nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:37.717 10:12:51 spdkcli_nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:37.717 10:12:51 spdkcli_nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:37.717 10:12:51 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:37.717 10:12:51 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:37.717 10:12:51 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:37.717 10:12:51 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:31:37.718 10:12:51 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:37.718 10:12:51 spdkcli_nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:31:37.718 10:12:51 spdkcli_nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:37.718 10:12:51 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:37.718 10:12:51 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:37.718 10:12:51 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:37.718 10:12:51 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:37.718 10:12:51 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:37.718 10:12:51 spdkcli_nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:37.718 10:12:51 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:37.718 10:12:51 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:31:37.718 10:12:51 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:31:37.718 10:12:51 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:31:37.718 10:12:51 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:31:37.718 10:12:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:31:37.718 10:12:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:37.718 10:12:51 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:31:37.718 10:12:51 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=96902 00:31:37.718 10:12:51 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 96902 00:31:37.718 10:12:51 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:31:37.718 
10:12:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@829 -- # '[' -z 96902 ']' 00:31:37.718 10:12:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:37.718 10:12:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@834 -- # local max_retries=100 00:31:37.718 10:12:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:37.718 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:37.718 10:12:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@838 -- # xtrace_disable 00:31:37.718 10:12:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:37.718 [2024-07-15 10:12:51.174128] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:31:37.718 [2024-07-15 10:12:51.174268] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96902 ] 00:31:37.977 [2024-07-15 10:12:51.312817] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:31:37.977 [2024-07-15 10:12:51.418218] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:31:37.977 [2024-07-15 10:12:51.418218] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:31:38.544 10:12:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:31:38.544 10:12:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@862 -- # return 0 00:31:38.544 10:12:52 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:31:38.544 10:12:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:31:38.544 10:12:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:38.544 10:12:52 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:31:38.544 10:12:52 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:31:38.544 10:12:52 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:31:38.544 10:12:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:31:38.544 10:12:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:38.544 10:12:52 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:31:38.544 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:31:38.544 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:31:38.544 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:31:38.544 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:31:38.544 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:31:38.544 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:31:38.544 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:31:38.544 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:31:38.544 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:31:38.545 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 
127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:31:38.545 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:31:38.545 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:31:38.545 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:31:38.545 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:31:38.545 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:31:38.545 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:31:38.545 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:31:38.545 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:31:38.545 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:31:38.545 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:31:38.545 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:31:38.545 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:31:38.545 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:31:38.545 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:31:38.545 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:31:38.545 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:31:38.545 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:31:38.545 ' 00:31:41.830 [2024-07-15 10:12:54.815542] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:42.767 [2024-07-15 10:12:56.142011] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:31:45.301 [2024-07-15 10:12:58.630944] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:31:47.205 [2024-07-15 10:13:00.772241] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:31:49.109 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:31:49.109 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:31:49.109 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:31:49.109 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:31:49.109 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:31:49.109 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:31:49.109 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:31:49.109 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW 
max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:31:49.109 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:31:49.109 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:31:49.109 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:31:49.109 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:31:49.109 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:31:49.109 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:31:49.109 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:31:49.109 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:31:49.109 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:31:49.109 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:31:49.109 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:31:49.109 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:31:49.109 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:31:49.109 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:31:49.109 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:31:49.109 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:31:49.109 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:31:49.109 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:31:49.109 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:31:49.109 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:31:49.109 10:13:02 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:31:49.109 10:13:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:31:49.109 10:13:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:49.109 10:13:02 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:31:49.109 10:13:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:31:49.109 10:13:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:49.109 10:13:02 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:31:49.109 10:13:02 
spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/spdkcli.py ll /nvmf 00:31:49.676 10:13:02 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/test/app/match/match /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:31:49.676 10:13:03 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:31:49.676 10:13:03 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:31:49.676 10:13:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:31:49.676 10:13:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:49.676 10:13:03 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:31:49.676 10:13:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:31:49.676 10:13:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:49.676 10:13:03 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:31:49.676 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:31:49.676 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:31:49.676 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:31:49.676 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:31:49.676 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:31:49.676 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:31:49.676 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:31:49.676 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:31:49.676 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:31:49.676 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:31:49.676 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:31:49.676 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:31:49.676 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:31:49.676 ' 00:31:56.239 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:31:56.239 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:31:56.239 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:31:56.239 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:31:56.239 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:31:56.240 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:31:56.240 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:31:56.240 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:31:56.240 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 
00:31:56.240 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:31:56.240 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:31:56.240 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:31:56.240 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:31:56.240 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:31:56.240 10:13:08 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:31:56.240 10:13:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:31:56.240 10:13:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:56.240 10:13:08 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 96902 00:31:56.240 10:13:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@948 -- # '[' -z 96902 ']' 00:31:56.240 10:13:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # kill -0 96902 00:31:56.240 10:13:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # uname 00:31:56.240 10:13:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:31:56.240 10:13:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 96902 00:31:56.240 10:13:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:31:56.240 10:13:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:31:56.240 10:13:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 96902' 00:31:56.240 killing process with pid 96902 00:31:56.240 10:13:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@967 -- # kill 96902 00:31:56.240 10:13:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@972 -- # wait 96902 00:31:56.240 10:13:08 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:31:56.240 10:13:08 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:31:56.240 10:13:08 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 96902 ']' 00:31:56.240 10:13:08 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 96902 00:31:56.240 10:13:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@948 -- # '[' -z 96902 ']' 00:31:56.240 10:13:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # kill -0 96902 00:31:56.240 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (96902) - No such process 00:31:56.240 10:13:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@975 -- # echo 'Process with pid 96902 is not found' 00:31:56.240 Process with pid 96902 is not found 00:31:56.240 10:13:08 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:31:56.240 10:13:08 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:31:56.240 10:13:08 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_nvmf.test /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:31:56.240 00:31:56.240 real 0m17.994s 00:31:56.240 user 0m39.718s 00:31:56.240 sys 0m1.059s 00:31:56.240 10:13:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:31:56.240 10:13:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:56.240 ************************************ 00:31:56.240 END TEST spdkcli_nvmf_tcp 00:31:56.240 ************************************ 00:31:56.240 10:13:08 -- common/autotest_common.sh@1142 -- # return 0 00:31:56.240 10:13:08 -- spdk/autotest.sh@290 -- # run_test nvmf_identify_passthru 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:31:56.240 10:13:08 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:31:56.240 10:13:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:56.240 10:13:08 -- common/autotest_common.sh@10 -- # set +x 00:31:56.240 ************************************ 00:31:56.240 START TEST nvmf_identify_passthru 00:31:56.240 ************************************ 00:31:56.240 10:13:08 nvmf_identify_passthru -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:31:56.240 * Looking for test storage... 00:31:56.240 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:31:56.240 10:13:09 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:31:56.240 10:13:09 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:31:56.240 10:13:09 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:56.240 10:13:09 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:56.240 10:13:09 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:56.240 10:13:09 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:56.240 10:13:09 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:56.240 10:13:09 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:56.240 10:13:09 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:56.240 10:13:09 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:56.240 10:13:09 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:56.240 10:13:09 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:56.240 10:13:09 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec 00:31:56.240 10:13:09 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=a2b6b25a-cc90-4aea-9f09-c06f8a634aec 00:31:56.240 10:13:09 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:56.240 10:13:09 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:56.240 10:13:09 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:31:56.240 10:13:09 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:56.240 10:13:09 nvmf_identify_passthru -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:31:56.240 10:13:09 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:56.240 10:13:09 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:56.240 10:13:09 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:56.240 10:13:09 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:56.240 10:13:09 
nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:56.240 10:13:09 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:56.240 10:13:09 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:31:56.240 10:13:09 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:56.240 10:13:09 nvmf_identify_passthru -- nvmf/common.sh@47 -- # : 0 00:31:56.240 10:13:09 nvmf_identify_passthru -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:56.240 10:13:09 nvmf_identify_passthru -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:56.240 10:13:09 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:56.240 10:13:09 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:56.240 10:13:09 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:56.240 10:13:09 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:56.240 10:13:09 nvmf_identify_passthru -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:56.240 10:13:09 nvmf_identify_passthru -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:56.240 10:13:09 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:31:56.240 10:13:09 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:56.240 10:13:09 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:56.240 10:13:09 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:56.240 10:13:09 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:56.240 10:13:09 nvmf_identify_passthru -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:56.240 10:13:09 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:56.240 10:13:09 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:31:56.240 10:13:09 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:56.240 10:13:09 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:31:56.240 10:13:09 nvmf_identify_passthru -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:31:56.241 10:13:09 nvmf_identify_passthru -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:56.241 10:13:09 nvmf_identify_passthru -- nvmf/common.sh@448 -- # prepare_net_devs 00:31:56.241 10:13:09 nvmf_identify_passthru -- nvmf/common.sh@410 -- # local -g is_hw=no 00:31:56.241 10:13:09 nvmf_identify_passthru -- nvmf/common.sh@412 -- # remove_spdk_ns 00:31:56.241 10:13:09 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:56.241 10:13:09 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:31:56.241 10:13:09 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:56.241 10:13:09 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:31:56.241 10:13:09 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:31:56.241 10:13:09 nvmf_identify_passthru -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:31:56.241 10:13:09 nvmf_identify_passthru -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:31:56.241 10:13:09 nvmf_identify_passthru -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:31:56.241 10:13:09 nvmf_identify_passthru -- nvmf/common.sh@432 -- # nvmf_veth_init 00:31:56.241 10:13:09 nvmf_identify_passthru -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:56.241 10:13:09 nvmf_identify_passthru -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:56.241 10:13:09 nvmf_identify_passthru -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:31:56.241 10:13:09 nvmf_identify_passthru -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:31:56.241 10:13:09 nvmf_identify_passthru -- nvmf/common.sh@145 -- # 
NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:31:56.241 10:13:09 nvmf_identify_passthru -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:31:56.241 10:13:09 nvmf_identify_passthru -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:31:56.241 10:13:09 nvmf_identify_passthru -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:56.241 10:13:09 nvmf_identify_passthru -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:31:56.241 10:13:09 nvmf_identify_passthru -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:31:56.241 10:13:09 nvmf_identify_passthru -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:31:56.241 10:13:09 nvmf_identify_passthru -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:31:56.241 10:13:09 nvmf_identify_passthru -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:31:56.241 10:13:09 nvmf_identify_passthru -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:31:56.241 Cannot find device "nvmf_tgt_br" 00:31:56.241 10:13:09 nvmf_identify_passthru -- nvmf/common.sh@155 -- # true 00:31:56.241 10:13:09 nvmf_identify_passthru -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:31:56.241 Cannot find device "nvmf_tgt_br2" 00:31:56.241 10:13:09 nvmf_identify_passthru -- nvmf/common.sh@156 -- # true 00:31:56.241 10:13:09 nvmf_identify_passthru -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:31:56.241 10:13:09 nvmf_identify_passthru -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:31:56.241 Cannot find device "nvmf_tgt_br" 00:31:56.241 10:13:09 nvmf_identify_passthru -- nvmf/common.sh@158 -- # true 00:31:56.241 10:13:09 nvmf_identify_passthru -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:31:56.241 Cannot find device "nvmf_tgt_br2" 00:31:56.241 10:13:09 nvmf_identify_passthru -- nvmf/common.sh@159 -- # true 00:31:56.241 10:13:09 nvmf_identify_passthru -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:31:56.241 10:13:09 nvmf_identify_passthru -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:31:56.241 10:13:09 nvmf_identify_passthru -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:31:56.241 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:31:56.241 10:13:09 nvmf_identify_passthru -- nvmf/common.sh@162 -- # true 00:31:56.241 10:13:09 nvmf_identify_passthru -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:31:56.241 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:31:56.241 10:13:09 nvmf_identify_passthru -- nvmf/common.sh@163 -- # true 00:31:56.241 10:13:09 nvmf_identify_passthru -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:31:56.241 10:13:09 nvmf_identify_passthru -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:31:56.241 10:13:09 nvmf_identify_passthru -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:31:56.241 10:13:09 nvmf_identify_passthru -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:31:56.241 10:13:09 nvmf_identify_passthru -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:31:56.241 10:13:09 nvmf_identify_passthru -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:31:56.241 10:13:09 nvmf_identify_passthru -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev 
nvmf_init_if 00:31:56.241 10:13:09 nvmf_identify_passthru -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:31:56.241 10:13:09 nvmf_identify_passthru -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:31:56.241 10:13:09 nvmf_identify_passthru -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:31:56.241 10:13:09 nvmf_identify_passthru -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:31:56.241 10:13:09 nvmf_identify_passthru -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:31:56.241 10:13:09 nvmf_identify_passthru -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:31:56.241 10:13:09 nvmf_identify_passthru -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:31:56.241 10:13:09 nvmf_identify_passthru -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:31:56.241 10:13:09 nvmf_identify_passthru -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:31:56.241 10:13:09 nvmf_identify_passthru -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:31:56.241 10:13:09 nvmf_identify_passthru -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:31:56.241 10:13:09 nvmf_identify_passthru -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:31:56.241 10:13:09 nvmf_identify_passthru -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:31:56.241 10:13:09 nvmf_identify_passthru -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:31:56.241 10:13:09 nvmf_identify_passthru -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:31:56.241 10:13:09 nvmf_identify_passthru -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:31:56.241 10:13:09 nvmf_identify_passthru -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:31:56.241 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:56.241 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.061 ms 00:31:56.241 00:31:56.241 --- 10.0.0.2 ping statistics --- 00:31:56.241 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:56.241 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:31:56.241 10:13:09 nvmf_identify_passthru -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:31:56.241 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:31:56.241 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.046 ms 00:31:56.241 00:31:56.241 --- 10.0.0.3 ping statistics --- 00:31:56.241 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:56.241 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:31:56.241 10:13:09 nvmf_identify_passthru -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:31:56.241 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:56.241 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:31:56.241 00:31:56.241 --- 10.0.0.1 ping statistics --- 00:31:56.241 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:56.241 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:31:56.241 10:13:09 nvmf_identify_passthru -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:56.241 10:13:09 nvmf_identify_passthru -- nvmf/common.sh@433 -- # return 0 00:31:56.241 10:13:09 nvmf_identify_passthru -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:31:56.241 10:13:09 nvmf_identify_passthru -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:56.241 10:13:09 nvmf_identify_passthru -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:31:56.241 10:13:09 nvmf_identify_passthru -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:31:56.241 10:13:09 nvmf_identify_passthru -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:56.241 10:13:09 nvmf_identify_passthru -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:31:56.241 10:13:09 nvmf_identify_passthru -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:31:56.241 10:13:09 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:31:56.241 10:13:09 nvmf_identify_passthru -- common/autotest_common.sh@722 -- # xtrace_disable 00:31:56.241 10:13:09 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:56.241 10:13:09 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:31:56.241 10:13:09 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # bdfs=() 00:31:56.241 10:13:09 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # local bdfs 00:31:56.241 10:13:09 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # bdfs=($(get_nvme_bdfs)) 00:31:56.241 10:13:09 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # get_nvme_bdfs 00:31:56.241 10:13:09 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # bdfs=() 00:31:56.241 10:13:09 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # local bdfs 00:31:56.241 10:13:09 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:31:56.241 10:13:09 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:31:56.241 10:13:09 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:31:56.241 10:13:09 nvmf_identify_passthru -- common/autotest_common.sh@1515 -- # (( 2 == 0 )) 00:31:56.241 10:13:09 nvmf_identify_passthru -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:31:56.241 10:13:09 nvmf_identify_passthru -- common/autotest_common.sh@1527 -- # echo 0000:00:10.0 00:31:56.241 10:13:09 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:00:10.0 00:31:56.241 10:13:09 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:00:10.0 ']' 00:31:56.241 10:13:09 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:31:56.241 10:13:09 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:31:56.241 10:13:09 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:31:56.241 10:13:09 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # nvme_serial_number=12340 
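With the local controller identified (PCIe address 0000:00:10.0, serial number 12340), the rest of this test exports that same device through an NVMe-oF TCP subsystem and reads its identify data back over the fabric; in essence the passthru verification comes down to the serial (and model) number matching on both paths. A condensed sketch of that comparison, reusing the two spdk_nvme_identify invocations visible in this log (the comparison wrapper itself is an assumption, not the test's literal code):
identify=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify
# Serial number straight from the PCIe controller, as extracted above.
pcie_sn=$($identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 | awk '/Serial Number:/ {print $3}')
# Serial number as reported by the NVMe-oF subsystem created further below.
tcp_sn=$($identify -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' | awk '/Serial Number:/ {print $3}')
[[ "$pcie_sn" == "$tcp_sn" ]] || { echo "passthru identify mismatch: $pcie_sn vs $tcp_sn" >&2; exit 1; }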
00:31:56.498 10:13:09 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:31:56.498 10:13:09 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:31:56.498 10:13:09 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:31:56.498 10:13:10 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=QEMU 00:31:56.498 10:13:10 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:31:56.499 10:13:10 nvmf_identify_passthru -- common/autotest_common.sh@728 -- # xtrace_disable 00:31:56.499 10:13:10 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:56.499 10:13:10 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:31:56.499 10:13:10 nvmf_identify_passthru -- common/autotest_common.sh@722 -- # xtrace_disable 00:31:56.499 10:13:10 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:56.499 10:13:10 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=97407 00:31:56.499 10:13:10 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:31:56.499 10:13:10 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:56.499 10:13:10 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 97407 00:31:56.499 10:13:10 nvmf_identify_passthru -- common/autotest_common.sh@829 -- # '[' -z 97407 ']' 00:31:56.499 10:13:10 nvmf_identify_passthru -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:56.499 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:56.499 10:13:10 nvmf_identify_passthru -- common/autotest_common.sh@834 -- # local max_retries=100 00:31:56.499 10:13:10 nvmf_identify_passthru -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:56.499 10:13:10 nvmf_identify_passthru -- common/autotest_common.sh@838 -- # xtrace_disable 00:31:56.499 10:13:10 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:56.756 [2024-07-15 10:13:10.130852] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:31:56.756 [2024-07-15 10:13:10.130940] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:56.756 [2024-07-15 10:13:10.269019] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:57.015 [2024-07-15 10:13:10.376151] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:57.015 [2024-07-15 10:13:10.376277] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:57.015 [2024-07-15 10:13:10.376312] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:57.015 [2024-07-15 10:13:10.376338] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 
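The target above was started with --wait-for-rpc, so everything that follows is driven over the RPC socket: passthru identify is switched on before the framework initializes, the TCP transport is created, the local PCIe controller is attached as Nvme0, and it is exposed 1:1 through nqn.2016-06.io.spdk:cnode1 on 10.0.0.2:4420. Condensed into a plain script (a sketch: rpc.py is assumed to talk to the default /var/tmp/spdk.sock that this target listens on; the individual RPCs and their arguments are the ones executed below):
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc nvmf_set_config --passthru-identify-ctrlr   # must precede framework init
$rpc framework_start_init
$rpc nvmf_create_transport -t tcp -o -u 8192
# Attach the local PCIe drive and pass it through an NVMe-oF TCP subsystem.
$rpc bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420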
00:31:57.015 [2024-07-15 10:13:10.376353] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:57.015 [2024-07-15 10:13:10.376780] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:31:57.015 [2024-07-15 10:13:10.376591] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:31:57.015 [2024-07-15 10:13:10.376782] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:31:57.015 [2024-07-15 10:13:10.376720] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:31:57.581 10:13:10 nvmf_identify_passthru -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:31:57.581 10:13:10 nvmf_identify_passthru -- common/autotest_common.sh@862 -- # return 0 00:31:57.581 10:13:10 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:31:57.581 10:13:10 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:57.581 10:13:10 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:57.581 10:13:11 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:57.581 10:13:11 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:31:57.581 10:13:11 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:57.581 10:13:11 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:57.581 [2024-07-15 10:13:11.075305] nvmf_tgt.c: 451:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:31:57.581 10:13:11 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:57.581 10:13:11 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:57.581 10:13:11 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:57.581 10:13:11 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:57.581 [2024-07-15 10:13:11.088850] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:57.581 10:13:11 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:57.581 10:13:11 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:31:57.581 10:13:11 nvmf_identify_passthru -- common/autotest_common.sh@728 -- # xtrace_disable 00:31:57.581 10:13:11 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:57.581 10:13:11 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 00:31:57.581 10:13:11 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:57.581 10:13:11 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:57.839 Nvme0n1 00:31:57.839 10:13:11 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:57.839 10:13:11 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:31:57.839 10:13:11 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:57.839 10:13:11 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:57.839 10:13:11 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:57.839 10:13:11 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:31:57.839 10:13:11 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:57.839 10:13:11 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:57.839 10:13:11 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:57.839 10:13:11 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:57.839 10:13:11 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:57.839 10:13:11 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:57.839 [2024-07-15 10:13:11.262549] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:57.839 10:13:11 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:57.839 10:13:11 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:31:57.839 10:13:11 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:57.839 10:13:11 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:57.839 [ 00:31:57.839 { 00:31:57.839 "allow_any_host": true, 00:31:57.839 "hosts": [], 00:31:57.839 "listen_addresses": [], 00:31:57.839 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:31:57.839 "subtype": "Discovery" 00:31:57.839 }, 00:31:57.839 { 00:31:57.839 "allow_any_host": true, 00:31:57.839 "hosts": [], 00:31:57.839 "listen_addresses": [ 00:31:57.839 { 00:31:57.839 "adrfam": "IPv4", 00:31:57.839 "traddr": "10.0.0.2", 00:31:57.839 "trsvcid": "4420", 00:31:57.839 "trtype": "TCP" 00:31:57.839 } 00:31:57.839 ], 00:31:57.839 "max_cntlid": 65519, 00:31:57.839 "max_namespaces": 1, 00:31:57.839 "min_cntlid": 1, 00:31:57.839 "model_number": "SPDK bdev Controller", 00:31:57.839 "namespaces": [ 00:31:57.839 { 00:31:57.839 "bdev_name": "Nvme0n1", 00:31:57.839 "name": "Nvme0n1", 00:31:57.839 "nguid": "E7038DC7BC68447D852B02DBB00A8F77", 00:31:57.839 "nsid": 1, 00:31:57.839 "uuid": "e7038dc7-bc68-447d-852b-02dbb00a8f77" 00:31:57.839 } 00:31:57.839 ], 00:31:57.839 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:31:57.839 "serial_number": "SPDK00000000000001", 00:31:57.839 "subtype": "NVMe" 00:31:57.839 } 00:31:57.839 ] 00:31:57.839 10:13:11 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:57.839 10:13:11 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:31:57.839 10:13:11 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:31:57.839 10:13:11 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:31:58.098 10:13:11 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=12340 00:31:58.098 10:13:11 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:31:58.098 10:13:11 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:31:58.099 10:13:11 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:31:58.357 10:13:11 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=QEMU 00:31:58.357 10:13:11 
nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' 12340 '!=' 12340 ']' 00:31:58.357 10:13:11 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' QEMU '!=' QEMU ']' 00:31:58.357 10:13:11 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:58.357 10:13:11 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:58.357 10:13:11 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:58.357 10:13:11 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:58.357 10:13:11 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:31:58.357 10:13:11 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:31:58.357 10:13:11 nvmf_identify_passthru -- nvmf/common.sh@488 -- # nvmfcleanup 00:31:58.357 10:13:11 nvmf_identify_passthru -- nvmf/common.sh@117 -- # sync 00:31:58.357 10:13:11 nvmf_identify_passthru -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:31:58.357 10:13:11 nvmf_identify_passthru -- nvmf/common.sh@120 -- # set +e 00:31:58.357 10:13:11 nvmf_identify_passthru -- nvmf/common.sh@121 -- # for i in {1..20} 00:31:58.357 10:13:11 nvmf_identify_passthru -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:31:58.357 rmmod nvme_tcp 00:31:58.357 rmmod nvme_fabrics 00:31:58.357 rmmod nvme_keyring 00:31:58.357 10:13:11 nvmf_identify_passthru -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:31:58.357 10:13:11 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set -e 00:31:58.357 10:13:11 nvmf_identify_passthru -- nvmf/common.sh@125 -- # return 0 00:31:58.357 10:13:11 nvmf_identify_passthru -- nvmf/common.sh@489 -- # '[' -n 97407 ']' 00:31:58.357 10:13:11 nvmf_identify_passthru -- nvmf/common.sh@490 -- # killprocess 97407 00:31:58.357 10:13:11 nvmf_identify_passthru -- common/autotest_common.sh@948 -- # '[' -z 97407 ']' 00:31:58.357 10:13:11 nvmf_identify_passthru -- common/autotest_common.sh@952 -- # kill -0 97407 00:31:58.357 10:13:11 nvmf_identify_passthru -- common/autotest_common.sh@953 -- # uname 00:31:58.357 10:13:11 nvmf_identify_passthru -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:31:58.357 10:13:11 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 97407 00:31:58.357 10:13:11 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:31:58.357 10:13:11 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:31:58.357 killing process with pid 97407 00:31:58.357 10:13:11 nvmf_identify_passthru -- common/autotest_common.sh@966 -- # echo 'killing process with pid 97407' 00:31:58.357 10:13:11 nvmf_identify_passthru -- common/autotest_common.sh@967 -- # kill 97407 00:31:58.357 10:13:11 nvmf_identify_passthru -- common/autotest_common.sh@972 -- # wait 97407 00:31:58.615 10:13:12 nvmf_identify_passthru -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:31:58.615 10:13:12 nvmf_identify_passthru -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:31:58.615 10:13:12 nvmf_identify_passthru -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:31:58.615 10:13:12 nvmf_identify_passthru -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:58.615 10:13:12 nvmf_identify_passthru -- nvmf/common.sh@278 -- # remove_spdk_ns 00:31:58.615 10:13:12 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:58.615 10:13:12 
nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:31:58.615 10:13:12 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:58.615 10:13:12 nvmf_identify_passthru -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:31:58.615 00:31:58.615 real 0m3.128s 00:31:58.615 user 0m7.272s 00:31:58.615 sys 0m0.917s 00:31:58.615 10:13:12 nvmf_identify_passthru -- common/autotest_common.sh@1124 -- # xtrace_disable 00:31:58.615 10:13:12 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:58.615 ************************************ 00:31:58.615 END TEST nvmf_identify_passthru 00:31:58.615 ************************************ 00:31:58.615 10:13:12 -- common/autotest_common.sh@1142 -- # return 0 00:31:58.615 10:13:12 -- spdk/autotest.sh@292 -- # run_test nvmf_dif /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:31:58.615 10:13:12 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:31:58.615 10:13:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:58.615 10:13:12 -- common/autotest_common.sh@10 -- # set +x 00:31:58.615 ************************************ 00:31:58.616 START TEST nvmf_dif 00:31:58.616 ************************************ 00:31:58.616 10:13:12 nvmf_dif -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:31:58.874 * Looking for test storage... 00:31:58.874 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:31:58.874 10:13:12 nvmf_dif -- target/dif.sh@13 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:31:58.874 10:13:12 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:31:58.874 10:13:12 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:58.874 10:13:12 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:58.874 10:13:12 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:58.874 10:13:12 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:58.874 10:13:12 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:58.874 10:13:12 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:58.874 10:13:12 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:58.874 10:13:12 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:58.874 10:13:12 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:58.874 10:13:12 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:58.874 10:13:12 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec 00:31:58.874 10:13:12 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=a2b6b25a-cc90-4aea-9f09-c06f8a634aec 00:31:58.874 10:13:12 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:58.874 10:13:12 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:58.874 10:13:12 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:31:58.874 10:13:12 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:58.874 10:13:12 nvmf_dif -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:31:58.874 10:13:12 nvmf_dif -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:58.874 10:13:12 nvmf_dif -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:58.874 10:13:12 nvmf_dif -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:58.874 10:13:12 nvmf_dif -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:58.874 10:13:12 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:58.874 10:13:12 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:58.874 10:13:12 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:31:58.874 10:13:12 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:58.874 10:13:12 nvmf_dif -- nvmf/common.sh@47 -- # : 0 00:31:58.874 10:13:12 nvmf_dif -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:58.874 10:13:12 nvmf_dif -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:58.874 10:13:12 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:58.874 10:13:12 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:58.874 10:13:12 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:58.874 10:13:12 nvmf_dif -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:58.874 10:13:12 nvmf_dif -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:58.874 10:13:12 nvmf_dif -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:58.874 10:13:12 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:31:58.874 10:13:12 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:31:58.874 10:13:12 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:31:58.874 10:13:12 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:31:58.874 10:13:12 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:31:58.874 10:13:12 nvmf_dif -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:31:58.874 10:13:12 nvmf_dif -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:58.874 10:13:12 nvmf_dif -- nvmf/common.sh@448 -- # prepare_net_devs 00:31:58.874 10:13:12 nvmf_dif -- nvmf/common.sh@410 -- # local -g is_hw=no 00:31:58.874 10:13:12 nvmf_dif -- nvmf/common.sh@412 -- # remove_spdk_ns 00:31:58.874 10:13:12 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:58.874 10:13:12 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:31:58.874 10:13:12 nvmf_dif -- common/autotest_common.sh@22 
-- # _remove_spdk_ns 00:31:58.874 10:13:12 nvmf_dif -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:31:58.874 10:13:12 nvmf_dif -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:31:58.874 10:13:12 nvmf_dif -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:31:58.874 10:13:12 nvmf_dif -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:31:58.874 10:13:12 nvmf_dif -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:31:58.874 10:13:12 nvmf_dif -- nvmf/common.sh@432 -- # nvmf_veth_init 00:31:58.874 10:13:12 nvmf_dif -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:58.874 10:13:12 nvmf_dif -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:58.874 10:13:12 nvmf_dif -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:31:58.874 10:13:12 nvmf_dif -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:31:58.874 10:13:12 nvmf_dif -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:31:58.874 10:13:12 nvmf_dif -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:31:58.874 10:13:12 nvmf_dif -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:31:58.874 10:13:12 nvmf_dif -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:58.874 10:13:12 nvmf_dif -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:31:58.874 10:13:12 nvmf_dif -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:31:58.874 10:13:12 nvmf_dif -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:31:58.874 10:13:12 nvmf_dif -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:31:58.874 10:13:12 nvmf_dif -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:31:58.874 10:13:12 nvmf_dif -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:31:58.874 Cannot find device "nvmf_tgt_br" 00:31:58.874 10:13:12 nvmf_dif -- nvmf/common.sh@155 -- # true 00:31:58.874 10:13:12 nvmf_dif -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:31:58.874 Cannot find device "nvmf_tgt_br2" 00:31:58.874 10:13:12 nvmf_dif -- nvmf/common.sh@156 -- # true 00:31:58.874 10:13:12 nvmf_dif -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:31:58.874 10:13:12 nvmf_dif -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:31:58.874 Cannot find device "nvmf_tgt_br" 00:31:58.874 10:13:12 nvmf_dif -- nvmf/common.sh@158 -- # true 00:31:58.874 10:13:12 nvmf_dif -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:31:58.874 Cannot find device "nvmf_tgt_br2" 00:31:58.874 10:13:12 nvmf_dif -- nvmf/common.sh@159 -- # true 00:31:58.874 10:13:12 nvmf_dif -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:31:59.134 10:13:12 nvmf_dif -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:31:59.134 10:13:12 nvmf_dif -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:31:59.134 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:31:59.134 10:13:12 nvmf_dif -- nvmf/common.sh@162 -- # true 00:31:59.134 10:13:12 nvmf_dif -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:31:59.134 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:31:59.134 10:13:12 nvmf_dif -- nvmf/common.sh@163 -- # true 00:31:59.134 10:13:12 nvmf_dif -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:31:59.134 10:13:12 nvmf_dif -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:31:59.134 10:13:12 nvmf_dif -- nvmf/common.sh@170 -- # ip link 
add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:31:59.134 10:13:12 nvmf_dif -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:31:59.134 10:13:12 nvmf_dif -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:31:59.134 10:13:12 nvmf_dif -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:31:59.134 10:13:12 nvmf_dif -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:31:59.134 10:13:12 nvmf_dif -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:31:59.134 10:13:12 nvmf_dif -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:31:59.134 10:13:12 nvmf_dif -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:31:59.134 10:13:12 nvmf_dif -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:31:59.134 10:13:12 nvmf_dif -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:31:59.134 10:13:12 nvmf_dif -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:31:59.134 10:13:12 nvmf_dif -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:31:59.134 10:13:12 nvmf_dif -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:31:59.134 10:13:12 nvmf_dif -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:31:59.134 10:13:12 nvmf_dif -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:31:59.134 10:13:12 nvmf_dif -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:31:59.134 10:13:12 nvmf_dif -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:31:59.134 10:13:12 nvmf_dif -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:31:59.134 10:13:12 nvmf_dif -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:31:59.134 10:13:12 nvmf_dif -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:31:59.134 10:13:12 nvmf_dif -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:31:59.134 10:13:12 nvmf_dif -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:31:59.134 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:59.134 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.069 ms 00:31:59.134 00:31:59.134 --- 10.0.0.2 ping statistics --- 00:31:59.134 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:59.134 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:31:59.134 10:13:12 nvmf_dif -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:31:59.134 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:31:59.134 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.047 ms 00:31:59.134 00:31:59.134 --- 10.0.0.3 ping statistics --- 00:31:59.134 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:59.134 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:31:59.134 10:13:12 nvmf_dif -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:31:59.134 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:59.134 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.052 ms 00:31:59.134 00:31:59.134 --- 10.0.0.1 ping statistics --- 00:31:59.134 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:59.134 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:31:59.134 10:13:12 nvmf_dif -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:59.134 10:13:12 nvmf_dif -- nvmf/common.sh@433 -- # return 0 00:31:59.134 10:13:12 nvmf_dif -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:31:59.134 10:13:12 nvmf_dif -- nvmf/common.sh@451 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:31:59.701 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:31:59.701 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:31:59.701 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:31:59.701 10:13:13 nvmf_dif -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:59.701 10:13:13 nvmf_dif -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:31:59.701 10:13:13 nvmf_dif -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:31:59.701 10:13:13 nvmf_dif -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:59.701 10:13:13 nvmf_dif -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:31:59.701 10:13:13 nvmf_dif -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:31:59.701 10:13:13 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:31:59.701 10:13:13 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:31:59.701 10:13:13 nvmf_dif -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:31:59.701 10:13:13 nvmf_dif -- common/autotest_common.sh@722 -- # xtrace_disable 00:31:59.701 10:13:13 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:31:59.701 10:13:13 nvmf_dif -- nvmf/common.sh@481 -- # nvmfpid=97753 00:31:59.701 10:13:13 nvmf_dif -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:31:59.701 10:13:13 nvmf_dif -- nvmf/common.sh@482 -- # waitforlisten 97753 00:31:59.701 10:13:13 nvmf_dif -- common/autotest_common.sh@829 -- # '[' -z 97753 ']' 00:31:59.701 10:13:13 nvmf_dif -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:59.701 10:13:13 nvmf_dif -- common/autotest_common.sh@834 -- # local max_retries=100 00:31:59.701 10:13:13 nvmf_dif -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:59.701 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:59.701 10:13:13 nvmf_dif -- common/autotest_common.sh@838 -- # xtrace_disable 00:31:59.701 10:13:13 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:31:59.959 [2024-07-15 10:13:13.301724] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:31:59.959 [2024-07-15 10:13:13.301895] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:59.959 [2024-07-15 10:13:13.441721] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:00.238 [2024-07-15 10:13:13.547718] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
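For the dif tests the target is launched inside the nvmf_tgt_ns_spdk network namespace and, once its RPC socket is up, the TCP transport is created with --dif-insert-or-strip (the nvmf_create_transport call appears a few lines further down in the trace). A minimal bring-up sketch along the lines of what the log shows, assuming rpc_cmd wraps scripts/rpc.py talking to the default /var/tmp/spdk.sock socket:

  # start the target in the prepared namespace (command as logged above)
  ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF &
  # crude stand-in for waitforlisten: wait until the RPC socket exists
  until [ -S /var/tmp/spdk.sock ]; do sleep 0.5; done
  # create the DIF-capable TCP transport (same options as the logged rpc_cmd)
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o --dif-insert-or-strip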
00:32:00.238 [2024-07-15 10:13:13.547858] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:00.238 [2024-07-15 10:13:13.547897] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:00.238 [2024-07-15 10:13:13.547921] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:00.238 [2024-07-15 10:13:13.547927] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:00.238 [2024-07-15 10:13:13.547956] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:32:00.806 10:13:14 nvmf_dif -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:00.806 10:13:14 nvmf_dif -- common/autotest_common.sh@862 -- # return 0 00:32:00.806 10:13:14 nvmf_dif -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:32:00.806 10:13:14 nvmf_dif -- common/autotest_common.sh@728 -- # xtrace_disable 00:32:00.806 10:13:14 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:32:00.806 10:13:14 nvmf_dif -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:00.806 10:13:14 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:32:00.806 10:13:14 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:32:00.806 10:13:14 nvmf_dif -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:00.806 10:13:14 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:32:00.806 [2024-07-15 10:13:14.228408] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:00.806 10:13:14 nvmf_dif -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:00.806 10:13:14 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:32:00.806 10:13:14 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:32:00.806 10:13:14 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:00.806 10:13:14 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:32:00.806 ************************************ 00:32:00.806 START TEST fio_dif_1_default 00:32:00.806 ************************************ 00:32:00.806 10:13:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1123 -- # fio_dif_1 00:32:00.806 10:13:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:32:00.806 10:13:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:32:00.806 10:13:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:32:00.806 10:13:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:32:00.806 10:13:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:32:00.806 10:13:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:32:00.806 10:13:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:00.806 10:13:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:32:00.806 bdev_null0 00:32:00.806 10:13:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:00.806 10:13:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:32:00.806 10:13:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:00.807 10:13:14 
nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:32:00.807 10:13:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:00.807 10:13:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:32:00.807 10:13:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:00.807 10:13:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:32:00.807 10:13:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:00.807 10:13:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:00.807 10:13:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:00.807 10:13:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:32:00.807 [2024-07-15 10:13:14.292377] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:00.807 10:13:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:00.807 10:13:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:32:00.807 10:13:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:32:00.807 10:13:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:32:00.807 10:13:14 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # config=() 00:32:00.807 10:13:14 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # local subsystem config 00:32:00.807 10:13:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:00.807 10:13:14 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:32:00.807 10:13:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:32:00.807 10:13:14 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:32:00.807 { 00:32:00.807 "params": { 00:32:00.807 "name": "Nvme$subsystem", 00:32:00.807 "trtype": "$TEST_TRANSPORT", 00:32:00.807 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:00.807 "adrfam": "ipv4", 00:32:00.807 "trsvcid": "$NVMF_PORT", 00:32:00.807 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:00.807 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:00.807 "hdgst": ${hdgst:-false}, 00:32:00.807 "ddgst": ${ddgst:-false} 00:32:00.807 }, 00:32:00.807 "method": "bdev_nvme_attach_controller" 00:32:00.807 } 00:32:00.807 EOF 00:32:00.807 )") 00:32:00.807 10:13:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:00.807 10:13:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:32:00.807 10:13:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:32:00.807 10:13:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:32:00.807 10:13:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:00.807 10:13:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local sanitizers 00:32:00.807 10:13:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:32:00.807 10:13:14 
nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # shift 00:32:00.807 10:13:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local asan_lib= 00:32:00.807 10:13:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:32:00.807 10:13:14 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # cat 00:32:00.807 10:13:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:32:00.807 10:13:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:32:00.807 10:13:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:32:00.807 10:13:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libasan 00:32:00.807 10:13:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:32:00.807 10:13:14 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # jq . 00:32:00.807 10:13:14 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@557 -- # IFS=, 00:32:00.807 10:13:14 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:32:00.807 "params": { 00:32:00.807 "name": "Nvme0", 00:32:00.807 "trtype": "tcp", 00:32:00.807 "traddr": "10.0.0.2", 00:32:00.807 "adrfam": "ipv4", 00:32:00.807 "trsvcid": "4420", 00:32:00.807 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:00.807 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:00.807 "hdgst": false, 00:32:00.807 "ddgst": false 00:32:00.807 }, 00:32:00.807 "method": "bdev_nvme_attach_controller" 00:32:00.807 }' 00:32:00.807 10:13:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:32:00.807 10:13:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:32:00.807 10:13:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:32:00.807 10:13:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:32:00.807 10:13:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:32:00.807 10:13:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:32:00.807 10:13:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:32:00.807 10:13:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:32:00.807 10:13:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:32:00.807 10:13:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:01.064 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:32:01.064 fio-3.35 00:32:01.064 Starting 1 thread 00:32:13.351 00:32:13.351 filename0: (groupid=0, jobs=1): err= 0: pid=97842: Mon Jul 15 10:13:25 2024 00:32:13.351 read: IOPS=1172, BW=4691KiB/s (4803kB/s)(46.0MiB/10035msec) 00:32:13.351 slat (nsec): min=5593, max=52265, avg=6742.69, stdev=2755.00 00:32:13.351 clat (usec): min=306, max=42408, avg=3391.74, stdev=10652.41 00:32:13.351 lat (usec): min=312, max=42415, avg=3398.49, stdev=10652.44 00:32:13.351 clat percentiles (usec): 00:32:13.351 | 1.00th=[ 314], 5.00th=[ 326], 10.00th=[ 330], 20.00th=[ 338], 00:32:13.351 | 30.00th=[ 347], 40.00th=[ 351], 50.00th=[ 359], 60.00th=[ 
371], 00:32:13.351 | 70.00th=[ 388], 80.00th=[ 400], 90.00th=[ 429], 95.00th=[40633], 00:32:13.351 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41681], 99.95th=[41681], 00:32:13.351 | 99.99th=[42206] 00:32:13.352 bw ( KiB/s): min= 3360, max= 6496, per=100.00%, avg=4705.00, stdev=963.40, samples=20 00:32:13.352 iops : min= 840, max= 1624, avg=1176.25, stdev=240.85, samples=20 00:32:13.352 lat (usec) : 500=92.31%, 750=0.15%, 1000=0.03% 00:32:13.352 lat (msec) : 4=0.03%, 50=7.48% 00:32:13.352 cpu : usr=94.01%, sys=5.37%, ctx=29, majf=0, minf=9 00:32:13.352 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:13.352 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:13.352 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:13.352 issued rwts: total=11768,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:13.352 latency : target=0, window=0, percentile=100.00%, depth=4 00:32:13.352 00:32:13.352 Run status group 0 (all jobs): 00:32:13.352 READ: bw=4691KiB/s (4803kB/s), 4691KiB/s-4691KiB/s (4803kB/s-4803kB/s), io=46.0MiB (48.2MB), run=10035-10035msec 00:32:13.352 10:13:25 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:32:13.352 10:13:25 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:32:13.352 10:13:25 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:32:13.352 10:13:25 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:32:13.352 10:13:25 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:32:13.352 10:13:25 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:32:13.352 10:13:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:13.352 10:13:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:32:13.352 10:13:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:13.352 10:13:25 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:32:13.352 10:13:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:13.352 10:13:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:32:13.352 ************************************ 00:32:13.352 END TEST fio_dif_1_default 00:32:13.352 ************************************ 00:32:13.352 10:13:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:13.352 00:32:13.352 real 0m11.035s 00:32:13.352 user 0m10.093s 00:32:13.352 sys 0m0.850s 00:32:13.352 10:13:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1124 -- # xtrace_disable 00:32:13.352 10:13:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:32:13.352 10:13:25 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:32:13.352 10:13:25 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:32:13.352 10:13:25 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:32:13.352 10:13:25 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:13.352 10:13:25 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:32:13.352 ************************************ 00:32:13.352 START TEST fio_dif_1_multi_subsystems 00:32:13.352 ************************************ 00:32:13.352 10:13:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1123 -- # 
fio_dif_1_multi_subsystems 00:32:13.352 10:13:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:32:13.352 10:13:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:32:13.352 10:13:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:32:13.352 10:13:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:32:13.352 10:13:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:32:13.352 10:13:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:32:13.352 10:13:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:32:13.352 10:13:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:13.352 10:13:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:32:13.352 bdev_null0 00:32:13.352 10:13:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:13.352 10:13:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:32:13.352 10:13:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:13.352 10:13:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:32:13.352 10:13:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:13.352 10:13:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:32:13.352 10:13:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:13.352 10:13:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:32:13.352 10:13:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:13.352 10:13:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:13.352 10:13:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:13.352 10:13:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:32:13.352 [2024-07-15 10:13:25.391432] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:13.352 10:13:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:13.352 10:13:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:32:13.352 10:13:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:32:13.352 10:13:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:32:13.352 10:13:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:32:13.352 10:13:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:13.352 10:13:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:32:13.352 bdev_null1 00:32:13.352 10:13:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:13.352 10:13:25 
nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:32:13.352 10:13:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:13.352 10:13:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:32:13.352 10:13:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:13.352 10:13:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:32:13.352 10:13:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:13.352 10:13:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:32:13.352 10:13:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:13.352 10:13:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:13.352 10:13:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:13.352 10:13:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:32:13.352 10:13:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:13.352 10:13:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:32:13.352 10:13:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:32:13.352 10:13:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:32:13.352 10:13:25 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # config=() 00:32:13.352 10:13:25 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # local subsystem config 00:32:13.352 10:13:25 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:32:13.352 10:13:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:13.352 10:13:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:32:13.352 10:13:25 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:32:13.352 { 00:32:13.352 "params": { 00:32:13.352 "name": "Nvme$subsystem", 00:32:13.352 "trtype": "$TEST_TRANSPORT", 00:32:13.353 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:13.353 "adrfam": "ipv4", 00:32:13.353 "trsvcid": "$NVMF_PORT", 00:32:13.353 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:13.353 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:13.353 "hdgst": ${hdgst:-false}, 00:32:13.353 "ddgst": ${ddgst:-false} 00:32:13.353 }, 00:32:13.353 "method": "bdev_nvme_attach_controller" 00:32:13.353 } 00:32:13.353 EOF 00:32:13.353 )") 00:32:13.353 10:13:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:13.353 10:13:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:32:13.353 10:13:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:32:13.353 10:13:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:32:13.353 10:13:25 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:13.353 10:13:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local sanitizers 00:32:13.353 10:13:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:32:13.353 10:13:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # shift 00:32:13.353 10:13:25 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:32:13.353 10:13:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local asan_lib= 00:32:13.353 10:13:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:32:13.353 10:13:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:32:13.353 10:13:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:32:13.353 10:13:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:32:13.353 10:13:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:32:13.353 10:13:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libasan 00:32:13.353 10:13:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:32:13.353 10:13:25 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:32:13.353 10:13:25 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:32:13.353 { 00:32:13.353 "params": { 00:32:13.353 "name": "Nvme$subsystem", 00:32:13.353 "trtype": "$TEST_TRANSPORT", 00:32:13.353 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:13.353 "adrfam": "ipv4", 00:32:13.353 "trsvcid": "$NVMF_PORT", 00:32:13.353 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:13.353 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:13.353 "hdgst": ${hdgst:-false}, 00:32:13.353 "ddgst": ${ddgst:-false} 00:32:13.353 }, 00:32:13.353 "method": "bdev_nvme_attach_controller" 00:32:13.353 } 00:32:13.353 EOF 00:32:13.353 )") 00:32:13.353 10:13:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:32:13.353 10:13:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:32:13.353 10:13:25 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:32:13.353 10:13:25 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # jq . 
00:32:13.353 10:13:25 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@557 -- # IFS=, 00:32:13.353 10:13:25 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:32:13.353 "params": { 00:32:13.353 "name": "Nvme0", 00:32:13.353 "trtype": "tcp", 00:32:13.353 "traddr": "10.0.0.2", 00:32:13.353 "adrfam": "ipv4", 00:32:13.353 "trsvcid": "4420", 00:32:13.353 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:13.353 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:13.353 "hdgst": false, 00:32:13.353 "ddgst": false 00:32:13.353 }, 00:32:13.353 "method": "bdev_nvme_attach_controller" 00:32:13.353 },{ 00:32:13.353 "params": { 00:32:13.353 "name": "Nvme1", 00:32:13.353 "trtype": "tcp", 00:32:13.353 "traddr": "10.0.0.2", 00:32:13.353 "adrfam": "ipv4", 00:32:13.353 "trsvcid": "4420", 00:32:13.353 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:13.353 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:13.353 "hdgst": false, 00:32:13.353 "ddgst": false 00:32:13.353 }, 00:32:13.353 "method": "bdev_nvme_attach_controller" 00:32:13.353 }' 00:32:13.353 10:13:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:32:13.353 10:13:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:32:13.353 10:13:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:32:13.353 10:13:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:32:13.353 10:13:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:32:13.353 10:13:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:32:13.353 10:13:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:32:13.353 10:13:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:32:13.353 10:13:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:32:13.353 10:13:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:13.353 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:32:13.353 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:32:13.353 fio-3.35 00:32:13.353 Starting 2 threads 00:32:23.335 00:32:23.335 filename0: (groupid=0, jobs=1): err= 0: pid=98003: Mon Jul 15 10:13:36 2024 00:32:23.335 read: IOPS=202, BW=809KiB/s (828kB/s)(8096KiB/10012msec) 00:32:23.335 slat (nsec): min=5633, max=46668, avg=9408.05, stdev=6624.69 00:32:23.335 clat (usec): min=318, max=42405, avg=19754.82, stdev=20210.83 00:32:23.335 lat (usec): min=324, max=42413, avg=19764.23, stdev=20210.15 00:32:23.335 clat percentiles (usec): 00:32:23.335 | 1.00th=[ 326], 5.00th=[ 343], 10.00th=[ 351], 20.00th=[ 367], 00:32:23.335 | 30.00th=[ 383], 40.00th=[ 404], 50.00th=[ 717], 60.00th=[40633], 00:32:23.335 | 70.00th=[40633], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:32:23.335 | 99.00th=[41157], 99.50th=[41681], 99.90th=[42206], 99.95th=[42206], 00:32:23.335 | 99.99th=[42206] 00:32:23.335 bw ( KiB/s): min= 608, max= 1081, per=51.20%, avg=806.26, stdev=136.97, samples=19 00:32:23.335 iops : 
min= 152, max= 270, avg=201.53, stdev=34.24, samples=19 00:32:23.335 lat (usec) : 500=46.44%, 750=4.84%, 1000=0.69% 00:32:23.335 lat (msec) : 2=0.20%, 50=47.83% 00:32:23.335 cpu : usr=97.53%, sys=2.11%, ctx=9, majf=0, minf=0 00:32:23.335 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:23.335 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:23.335 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:23.335 issued rwts: total=2024,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:23.335 latency : target=0, window=0, percentile=100.00%, depth=4 00:32:23.335 filename1: (groupid=0, jobs=1): err= 0: pid=98004: Mon Jul 15 10:13:36 2024 00:32:23.335 read: IOPS=191, BW=766KiB/s (785kB/s)(7664KiB/10002msec) 00:32:23.335 slat (nsec): min=5644, max=41529, avg=8945.92, stdev=5611.43 00:32:23.335 clat (usec): min=318, max=41445, avg=20852.47, stdev=20228.09 00:32:23.335 lat (usec): min=324, max=41457, avg=20861.42, stdev=20227.22 00:32:23.335 clat percentiles (usec): 00:32:23.335 | 1.00th=[ 326], 5.00th=[ 338], 10.00th=[ 347], 20.00th=[ 359], 00:32:23.335 | 30.00th=[ 379], 40.00th=[ 424], 50.00th=[40633], 60.00th=[40633], 00:32:23.335 | 70.00th=[40633], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:32:23.335 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41681], 99.95th=[41681], 00:32:23.335 | 99.99th=[41681] 00:32:23.335 bw ( KiB/s): min= 512, max= 1306, per=48.85%, avg=769.26, stdev=166.30, samples=19 00:32:23.335 iops : min= 128, max= 326, avg=192.26, stdev=41.46, samples=19 00:32:23.335 lat (usec) : 500=41.13%, 750=7.31%, 1000=0.84% 00:32:23.335 lat (msec) : 2=0.21%, 50=50.52% 00:32:23.335 cpu : usr=97.62%, sys=2.04%, ctx=15, majf=0, minf=0 00:32:23.335 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:23.335 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:23.335 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:23.335 issued rwts: total=1916,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:23.335 latency : target=0, window=0, percentile=100.00%, depth=4 00:32:23.335 00:32:23.335 Run status group 0 (all jobs): 00:32:23.335 READ: bw=1574KiB/s (1612kB/s), 766KiB/s-809KiB/s (785kB/s-828kB/s), io=15.4MiB (16.1MB), run=10002-10012msec 00:32:23.335 10:13:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:32:23.335 10:13:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:32:23.335 10:13:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:32:23.335 10:13:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:32:23.335 10:13:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:32:23.335 10:13:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:32:23.335 10:13:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:23.335 10:13:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:32:23.335 10:13:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:23.335 10:13:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:32:23.335 10:13:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:23.335 10:13:36 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:32:23.335 10:13:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:23.335 10:13:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:32:23.335 10:13:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:32:23.335 10:13:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:32:23.335 10:13:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:23.335 10:13:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:23.335 10:13:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:32:23.335 10:13:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:23.335 10:13:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:32:23.335 10:13:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:23.335 10:13:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:32:23.335 ************************************ 00:32:23.335 END TEST fio_dif_1_multi_subsystems 00:32:23.335 ************************************ 00:32:23.335 10:13:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:23.335 00:32:23.335 real 0m11.222s 00:32:23.335 user 0m20.401s 00:32:23.335 sys 0m0.711s 00:32:23.335 10:13:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1124 -- # xtrace_disable 00:32:23.335 10:13:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:32:23.335 10:13:36 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:32:23.335 10:13:36 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:32:23.335 10:13:36 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:32:23.335 10:13:36 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:23.335 10:13:36 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:32:23.335 ************************************ 00:32:23.335 START TEST fio_dif_rand_params 00:32:23.335 ************************************ 00:32:23.335 10:13:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1123 -- # fio_dif_rand_params 00:32:23.335 10:13:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:32:23.336 10:13:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:32:23.336 10:13:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:32:23.336 10:13:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:32:23.336 10:13:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:32:23.336 10:13:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:32:23.336 10:13:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:32:23.336 10:13:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:32:23.336 10:13:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:32:23.336 10:13:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:32:23.336 10:13:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 
00:32:23.336 10:13:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:32:23.336 10:13:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:32:23.336 10:13:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:23.336 10:13:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:23.336 bdev_null0 00:32:23.336 10:13:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:23.336 10:13:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:32:23.336 10:13:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:23.336 10:13:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:23.336 10:13:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:23.336 10:13:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:32:23.336 10:13:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:23.336 10:13:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:23.336 10:13:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:23.336 10:13:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:23.336 10:13:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:23.336 10:13:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:23.336 [2024-07-15 10:13:36.672856] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:23.336 10:13:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:23.336 10:13:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:32:23.336 10:13:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:32:23.336 10:13:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:32:23.336 10:13:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:32:23.336 10:13:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:32:23.336 10:13:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:23.336 10:13:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:32:23.336 10:13:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:32:23.336 10:13:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:32:23.336 { 00:32:23.336 "params": { 00:32:23.336 "name": "Nvme$subsystem", 00:32:23.336 "trtype": "$TEST_TRANSPORT", 00:32:23.336 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:23.336 "adrfam": "ipv4", 00:32:23.336 "trsvcid": "$NVMF_PORT", 00:32:23.336 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:23.336 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:23.336 "hdgst": ${hdgst:-false}, 00:32:23.336 "ddgst": ${ddgst:-false} 00:32:23.336 }, 00:32:23.336 "method": "bdev_nvme_attach_controller" 00:32:23.336 } 00:32:23.336 EOF 
00:32:23.336 )") 00:32:23.336 10:13:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:23.336 10:13:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:32:23.336 10:13:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:32:23.336 10:13:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:32:23.336 10:13:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:23.336 10:13:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:32:23.336 10:13:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:32:23.336 10:13:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:32:23.336 10:13:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:32:23.336 10:13:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:32:23.336 10:13:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:32:23.336 10:13:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:32:23.336 10:13:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:32:23.336 10:13:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:32:23.336 10:13:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:32:23.336 10:13:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:32:23.336 10:13:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
00:32:23.336 10:13:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:32:23.336 10:13:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:32:23.336 "params": { 00:32:23.336 "name": "Nvme0", 00:32:23.336 "trtype": "tcp", 00:32:23.336 "traddr": "10.0.0.2", 00:32:23.336 "adrfam": "ipv4", 00:32:23.336 "trsvcid": "4420", 00:32:23.336 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:23.336 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:23.336 "hdgst": false, 00:32:23.336 "ddgst": false 00:32:23.336 }, 00:32:23.336 "method": "bdev_nvme_attach_controller" 00:32:23.336 }' 00:32:23.336 10:13:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:32:23.336 10:13:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:32:23.336 10:13:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:32:23.336 10:13:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:32:23.336 10:13:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:32:23.336 10:13:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:32:23.336 10:13:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:32:23.336 10:13:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:32:23.336 10:13:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:32:23.336 10:13:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:23.336 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:32:23.336 ... 
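The fio command traced above takes the JSON config on /dev/fd/62 and a generated job description on /dev/fd/61; the job file itself is not printed in the log, only the parameters fio echoes back (filename0, rw=randread, bs=128KiB, iodepth=3, 3 threads, 5s runtime). The sketch below shows one plausible shape for that job file; the option spelling, the bdev name Nvme0n1, and the file path are assumptions, while the numeric values mirror the bs/iodepth/numjobs/runtime settings of this test.

# Sketch only: a plausible job description for the run above. The real
# gen_fio_conf output is not shown in the log, so "Nvme0n1" and the exact
# option set are assumptions; the values match the echoed job line.
cat > /tmp/fio_dif_rand_params.example.fio <<'FIO'
[global]
thread=1
ioengine=spdk_bdev
direct=1
time_based=1
runtime=5
rw=randread
bs=128k
iodepth=3
numjobs=3

[filename0]
filename=Nvme0n1
FIO

# It would then be driven much like the traced command, for example:
#   LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev \
#   /usr/src/fio/fio --ioengine=spdk_bdev \
#       --spdk_json_conf=<(gen_target_json_sketch 0) \
#       /tmp/fio_dif_rand_params.example.fio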
00:32:23.336 fio-3.35 00:32:23.336 Starting 3 threads 00:32:29.963 00:32:29.963 filename0: (groupid=0, jobs=1): err= 0: pid=98160: Mon Jul 15 10:13:42 2024 00:32:29.963 read: IOPS=327, BW=41.0MiB/s (43.0MB/s)(205MiB/5004msec) 00:32:29.963 slat (nsec): min=5994, max=48035, avg=10916.09, stdev=3741.50 00:32:29.963 clat (usec): min=3945, max=50355, avg=9135.75, stdev=4377.49 00:32:29.963 lat (usec): min=3955, max=50364, avg=9146.67, stdev=4377.59 00:32:29.963 clat percentiles (usec): 00:32:29.963 | 1.00th=[ 5604], 5.00th=[ 6783], 10.00th=[ 7701], 20.00th=[ 8094], 00:32:29.963 | 30.00th=[ 8455], 40.00th=[ 8586], 50.00th=[ 8717], 60.00th=[ 8979], 00:32:29.963 | 70.00th=[ 9110], 80.00th=[ 9372], 90.00th=[ 9765], 95.00th=[10159], 00:32:29.963 | 99.00th=[49021], 99.50th=[49546], 99.90th=[50070], 99.95th=[50594], 00:32:29.963 | 99.99th=[50594] 00:32:29.963 bw ( KiB/s): min=33792, max=46080, per=36.76%, avg=41585.78, stdev=4064.10, samples=9 00:32:29.963 iops : min= 264, max= 360, avg=324.89, stdev=31.75, samples=9 00:32:29.963 lat (msec) : 4=0.06%, 10=93.35%, 20=5.49%, 50=0.79%, 100=0.30% 00:32:29.963 cpu : usr=94.42%, sys=4.46%, ctx=8, majf=0, minf=0 00:32:29.963 IO depths : 1=0.3%, 2=99.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:29.963 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:29.963 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:29.963 issued rwts: total=1640,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:29.963 latency : target=0, window=0, percentile=100.00%, depth=3 00:32:29.963 filename0: (groupid=0, jobs=1): err= 0: pid=98161: Mon Jul 15 10:13:42 2024 00:32:29.963 read: IOPS=256, BW=32.0MiB/s (33.6MB/s)(160MiB/5002msec) 00:32:29.963 slat (nsec): min=5737, max=36154, avg=9373.86, stdev=4239.16 00:32:29.963 clat (usec): min=3697, max=14633, avg=11689.47, stdev=1733.91 00:32:29.963 lat (usec): min=3704, max=14648, avg=11698.84, stdev=1734.21 00:32:29.963 clat percentiles (usec): 00:32:29.963 | 1.00th=[ 7046], 5.00th=[ 7504], 10.00th=[ 8225], 20.00th=[11207], 00:32:29.963 | 30.00th=[11469], 40.00th=[11863], 50.00th=[12125], 60.00th=[12256], 00:32:29.963 | 70.00th=[12518], 80.00th=[12911], 90.00th=[13304], 95.00th=[13698], 00:32:29.963 | 99.00th=[14353], 99.50th=[14615], 99.90th=[14615], 99.95th=[14615], 00:32:29.963 | 99.99th=[14615] 00:32:29.964 bw ( KiB/s): min=30720, max=36864, per=28.89%, avg=32682.67, stdev=1846.04, samples=9 00:32:29.964 iops : min= 240, max= 288, avg=255.33, stdev=14.42, samples=9 00:32:29.964 lat (msec) : 4=0.23%, 10=13.35%, 20=86.42% 00:32:29.964 cpu : usr=95.60%, sys=3.46%, ctx=5, majf=0, minf=0 00:32:29.964 IO depths : 1=32.8%, 2=67.2%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:29.964 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:29.964 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:29.964 issued rwts: total=1281,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:29.964 latency : target=0, window=0, percentile=100.00%, depth=3 00:32:29.964 filename0: (groupid=0, jobs=1): err= 0: pid=98162: Mon Jul 15 10:13:42 2024 00:32:29.964 read: IOPS=303, BW=37.9MiB/s (39.7MB/s)(191MiB/5030msec) 00:32:29.964 slat (nsec): min=5726, max=39605, avg=10002.40, stdev=3508.19 00:32:29.964 clat (usec): min=4896, max=52705, avg=9875.54, stdev=4214.27 00:32:29.964 lat (usec): min=4904, max=52717, avg=9885.54, stdev=4214.35 00:32:29.964 clat percentiles (usec): 00:32:29.964 | 1.00th=[ 5342], 5.00th=[ 6128], 10.00th=[ 8356], 20.00th=[ 8979], 00:32:29.964 
| 30.00th=[ 9372], 40.00th=[ 9503], 50.00th=[ 9765], 60.00th=[ 9896], 00:32:29.964 | 70.00th=[10028], 80.00th=[10290], 90.00th=[10683], 95.00th=[11207], 00:32:29.964 | 99.00th=[12780], 99.50th=[50070], 99.90th=[51643], 99.95th=[52691], 00:32:29.964 | 99.99th=[52691] 00:32:29.964 bw ( KiB/s): min=34048, max=42752, per=34.44%, avg=38963.20, stdev=2842.51, samples=10 00:32:29.964 iops : min= 266, max= 334, avg=304.40, stdev=22.21, samples=10 00:32:29.964 lat (msec) : 10=69.38%, 20=29.64%, 50=0.39%, 100=0.59% 00:32:29.964 cpu : usr=95.84%, sys=3.14%, ctx=7, majf=0, minf=0 00:32:29.964 IO depths : 1=9.6%, 2=90.4%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:29.964 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:29.964 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:29.964 issued rwts: total=1525,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:29.964 latency : target=0, window=0, percentile=100.00%, depth=3 00:32:29.964 00:32:29.964 Run status group 0 (all jobs): 00:32:29.964 READ: bw=110MiB/s (116MB/s), 32.0MiB/s-41.0MiB/s (33.6MB/s-43.0MB/s), io=556MiB (583MB), run=5002-5030msec 00:32:29.964 10:13:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:32:29.964 10:13:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:32:29.964 10:13:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:32:29.964 10:13:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:32:29.964 10:13:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:32:29.964 10:13:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:32:29.964 10:13:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:29.964 10:13:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:29.964 10:13:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:29.964 10:13:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:32:29.964 10:13:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:29.964 10:13:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:29.964 10:13:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:29.964 10:13:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:32:29.964 10:13:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:32:29.964 10:13:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:32:29.964 10:13:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:32:29.964 10:13:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:32:29.964 10:13:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:32:29.964 10:13:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:32:29.964 10:13:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:32:29.964 10:13:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:32:29.964 10:13:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:32:29.964 10:13:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:32:29.964 10:13:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create 
bdev_null0 64 512 --md-size 16 --dif-type 2 00:32:29.964 10:13:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:29.964 10:13:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:29.964 bdev_null0 00:32:29.964 10:13:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:29.964 10:13:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:32:29.964 10:13:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:29.964 10:13:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:29.964 10:13:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:29.964 10:13:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:32:29.964 10:13:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:29.964 10:13:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:29.964 10:13:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:29.964 10:13:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:29.964 10:13:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:29.964 10:13:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:29.964 [2024-07-15 10:13:42.751339] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:29.964 10:13:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:29.964 10:13:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:32:29.964 10:13:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:32:29.964 10:13:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:32:29.964 10:13:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:32:29.964 10:13:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:29.964 10:13:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:29.964 bdev_null1 00:32:29.964 10:13:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:29.964 10:13:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:32:29.964 10:13:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:29.964 10:13:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:29.964 10:13:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:29.964 10:13:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:32:29.964 10:13:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:29.964 10:13:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:29.964 10:13:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # 
[[ 0 == 0 ]] 00:32:29.964 10:13:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:29.964 10:13:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:29.964 10:13:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:29.964 10:13:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:29.964 10:13:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:32:29.964 10:13:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:32:29.964 10:13:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:32:29.964 10:13:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:32:29.964 10:13:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:29.964 10:13:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:29.964 bdev_null2 00:32:29.964 10:13:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:29.964 10:13:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:32:29.964 10:13:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:29.964 10:13:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:29.964 10:13:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:29.964 10:13:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:32:29.964 10:13:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:29.964 10:13:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:29.964 10:13:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:29.964 10:13:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:32:29.964 10:13:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:29.964 10:13:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:29.964 10:13:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:29.964 10:13:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:32:29.964 10:13:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:32:29.964 10:13:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:32:29.964 10:13:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:32:29.964 10:13:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:32:29.964 10:13:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:29.964 10:13:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:32:29.964 10:13:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:32:29.964 10:13:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 
00:32:29.964 { 00:32:29.964 "params": { 00:32:29.964 "name": "Nvme$subsystem", 00:32:29.964 "trtype": "$TEST_TRANSPORT", 00:32:29.964 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:29.964 "adrfam": "ipv4", 00:32:29.964 "trsvcid": "$NVMF_PORT", 00:32:29.964 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:29.965 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:29.965 "hdgst": ${hdgst:-false}, 00:32:29.965 "ddgst": ${ddgst:-false} 00:32:29.965 }, 00:32:29.965 "method": "bdev_nvme_attach_controller" 00:32:29.965 } 00:32:29.965 EOF 00:32:29.965 )") 00:32:29.965 10:13:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:29.965 10:13:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:32:29.965 10:13:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:32:29.965 10:13:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:32:29.965 10:13:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:29.965 10:13:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:32:29.965 10:13:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:32:29.965 10:13:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:32:29.965 10:13:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:32:29.965 10:13:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:32:29.965 10:13:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:32:29.965 10:13:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:32:29.965 10:13:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:32:29.965 10:13:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:32:29.965 10:13:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:32:29.965 10:13:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:32:29.965 10:13:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:32:29.965 10:13:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:32:29.965 10:13:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:32:29.965 { 00:32:29.965 "params": { 00:32:29.965 "name": "Nvme$subsystem", 00:32:29.965 "trtype": "$TEST_TRANSPORT", 00:32:29.965 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:29.965 "adrfam": "ipv4", 00:32:29.965 "trsvcid": "$NVMF_PORT", 00:32:29.965 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:29.965 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:29.965 "hdgst": ${hdgst:-false}, 00:32:29.965 "ddgst": ${ddgst:-false} 00:32:29.965 }, 00:32:29.965 "method": "bdev_nvme_attach_controller" 00:32:29.965 } 00:32:29.965 EOF 00:32:29.965 )") 00:32:29.965 10:13:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:32:29.965 10:13:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:32:29.965 10:13:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:32:29.965 10:13:42 nvmf_dif.fio_dif_rand_params 
-- target/dif.sh@73 -- # cat 00:32:29.965 10:13:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:32:29.965 10:13:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:32:29.965 10:13:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:32:29.965 10:13:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:32:29.965 { 00:32:29.965 "params": { 00:32:29.965 "name": "Nvme$subsystem", 00:32:29.965 "trtype": "$TEST_TRANSPORT", 00:32:29.965 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:29.965 "adrfam": "ipv4", 00:32:29.965 "trsvcid": "$NVMF_PORT", 00:32:29.965 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:29.965 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:29.965 "hdgst": ${hdgst:-false}, 00:32:29.965 "ddgst": ${ddgst:-false} 00:32:29.965 }, 00:32:29.965 "method": "bdev_nvme_attach_controller" 00:32:29.965 } 00:32:29.965 EOF 00:32:29.965 )") 00:32:29.965 10:13:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:32:29.965 10:13:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:32:29.965 10:13:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:32:29.965 10:13:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:32:29.965 "params": { 00:32:29.965 "name": "Nvme0", 00:32:29.965 "trtype": "tcp", 00:32:29.965 "traddr": "10.0.0.2", 00:32:29.965 "adrfam": "ipv4", 00:32:29.965 "trsvcid": "4420", 00:32:29.965 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:29.965 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:29.965 "hdgst": false, 00:32:29.965 "ddgst": false 00:32:29.965 }, 00:32:29.965 "method": "bdev_nvme_attach_controller" 00:32:29.965 },{ 00:32:29.965 "params": { 00:32:29.965 "name": "Nvme1", 00:32:29.965 "trtype": "tcp", 00:32:29.965 "traddr": "10.0.0.2", 00:32:29.965 "adrfam": "ipv4", 00:32:29.965 "trsvcid": "4420", 00:32:29.965 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:29.965 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:29.965 "hdgst": false, 00:32:29.965 "ddgst": false 00:32:29.965 }, 00:32:29.965 "method": "bdev_nvme_attach_controller" 00:32:29.965 },{ 00:32:29.965 "params": { 00:32:29.965 "name": "Nvme2", 00:32:29.965 "trtype": "tcp", 00:32:29.965 "traddr": "10.0.0.2", 00:32:29.965 "adrfam": "ipv4", 00:32:29.965 "trsvcid": "4420", 00:32:29.965 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:32:29.965 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:32:29.965 "hdgst": false, 00:32:29.965 "ddgst": false 00:32:29.965 }, 00:32:29.965 "method": "bdev_nvme_attach_controller" 00:32:29.965 }' 00:32:29.965 10:13:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:32:29.965 10:13:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:32:29.965 10:13:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:32:29.965 10:13:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:32:29.965 10:13:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:32:29.965 10:13:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:32:29.965 10:13:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:32:29.965 10:13:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:32:29.965 10:13:42 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:32:29.965 10:13:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:29.965 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:32:29.965 ... 00:32:29.965 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:32:29.965 ... 00:32:29.965 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:32:29.965 ... 00:32:29.965 fio-3.35 00:32:29.965 Starting 24 threads 00:32:42.170 00:32:42.170 filename0: (groupid=0, jobs=1): err= 0: pid=98262: Mon Jul 15 10:13:53 2024 00:32:42.170 read: IOPS=208, BW=832KiB/s (852kB/s)(8344KiB/10027msec) 00:32:42.170 slat (usec): min=4, max=6022, avg=15.84, stdev=158.26 00:32:42.170 clat (msec): min=30, max=169, avg=76.72, stdev=22.97 00:32:42.170 lat (msec): min=31, max=169, avg=76.74, stdev=22.97 00:32:42.170 clat percentiles (msec): 00:32:42.170 | 1.00th=[ 37], 5.00th=[ 46], 10.00th=[ 50], 20.00th=[ 58], 00:32:42.170 | 30.00th=[ 63], 40.00th=[ 67], 50.00th=[ 71], 60.00th=[ 83], 00:32:42.170 | 70.00th=[ 87], 80.00th=[ 94], 90.00th=[ 109], 95.00th=[ 121], 00:32:42.170 | 99.00th=[ 144], 99.50th=[ 150], 99.90th=[ 169], 99.95th=[ 169], 00:32:42.170 | 99.99th=[ 169] 00:32:42.170 bw ( KiB/s): min= 584, max= 1072, per=3.73%, avg=830.40, stdev=136.10, samples=20 00:32:42.170 iops : min= 146, max= 268, avg=207.60, stdev=34.03, samples=20 00:32:42.170 lat (msec) : 50=10.31%, 100=76.56%, 250=13.14% 00:32:42.170 cpu : usr=38.12%, sys=0.43%, ctx=1058, majf=0, minf=9 00:32:42.170 IO depths : 1=2.4%, 2=5.3%, 4=15.1%, 8=66.6%, 16=10.6%, 32=0.0%, >=64=0.0% 00:32:42.170 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:42.170 complete : 0=0.0%, 4=91.3%, 8=3.6%, 16=5.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:42.170 issued rwts: total=2086,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:42.170 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:42.170 filename0: (groupid=0, jobs=1): err= 0: pid=98263: Mon Jul 15 10:13:53 2024 00:32:42.170 read: IOPS=260, BW=1043KiB/s (1068kB/s)(10.3MiB/10068msec) 00:32:42.170 slat (usec): min=6, max=8019, avg=21.52, stdev=281.48 00:32:42.170 clat (usec): min=1440, max=154750, avg=61074.90, stdev=27410.06 00:32:42.170 lat (usec): min=1456, max=154765, avg=61096.42, stdev=27414.78 00:32:42.170 clat percentiles (usec): 00:32:42.170 | 1.00th=[ 1598], 5.00th=[ 4817], 10.00th=[ 35914], 20.00th=[ 43779], 00:32:42.170 | 30.00th=[ 47973], 40.00th=[ 55313], 50.00th=[ 59507], 60.00th=[ 62129], 00:32:42.170 | 70.00th=[ 69731], 80.00th=[ 81265], 90.00th=[ 98042], 95.00th=[112722], 00:32:42.170 | 99.00th=[131597], 99.50th=[135267], 99.90th=[154141], 99.95th=[154141], 00:32:42.170 | 99.99th=[154141] 00:32:42.170 bw ( KiB/s): min= 592, max= 2693, per=4.69%, avg=1044.55, stdev=430.46, samples=20 00:32:42.171 iops : min= 148, max= 673, avg=261.05, stdev=107.57, samples=20 00:32:42.171 lat (msec) : 2=2.02%, 4=2.25%, 10=3.05%, 50=26.24%, 100=58.07% 00:32:42.171 lat (msec) : 250=8.38% 00:32:42.171 cpu : usr=38.90%, sys=0.55%, ctx=1087, majf=0, minf=0 00:32:42.171 IO depths : 1=1.3%, 2=2.9%, 4=11.2%, 8=72.8%, 16=11.8%, 32=0.0%, >=64=0.0% 00:32:42.171 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:32:42.171 complete : 0=0.0%, 4=90.2%, 8=4.8%, 16=5.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:42.171 issued rwts: total=2626,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:42.171 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:42.171 filename0: (groupid=0, jobs=1): err= 0: pid=98264: Mon Jul 15 10:13:53 2024 00:32:42.171 read: IOPS=223, BW=895KiB/s (917kB/s)(8996KiB/10048msec) 00:32:42.171 slat (usec): min=6, max=8024, avg=20.86, stdev=292.33 00:32:42.171 clat (msec): min=29, max=154, avg=71.29, stdev=21.07 00:32:42.171 lat (msec): min=29, max=154, avg=71.31, stdev=21.06 00:32:42.171 clat percentiles (msec): 00:32:42.171 | 1.00th=[ 35], 5.00th=[ 42], 10.00th=[ 46], 20.00th=[ 56], 00:32:42.171 | 30.00th=[ 61], 40.00th=[ 62], 50.00th=[ 68], 60.00th=[ 72], 00:32:42.171 | 70.00th=[ 81], 80.00th=[ 89], 90.00th=[ 101], 95.00th=[ 112], 00:32:42.171 | 99.00th=[ 127], 99.50th=[ 128], 99.90th=[ 155], 99.95th=[ 155], 00:32:42.171 | 99.99th=[ 155] 00:32:42.171 bw ( KiB/s): min= 600, max= 1120, per=4.01%, avg=893.25, stdev=123.10, samples=20 00:32:42.171 iops : min= 150, max= 280, avg=223.30, stdev=30.77, samples=20 00:32:42.171 lat (msec) : 50=15.43%, 100=74.74%, 250=9.83% 00:32:42.171 cpu : usr=33.83%, sys=0.48%, ctx=1205, majf=0, minf=9 00:32:42.171 IO depths : 1=1.2%, 2=2.6%, 4=10.2%, 8=73.9%, 16=12.0%, 32=0.0%, >=64=0.0% 00:32:42.171 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:42.171 complete : 0=0.0%, 4=89.8%, 8=5.4%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:42.171 issued rwts: total=2249,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:42.171 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:42.171 filename0: (groupid=0, jobs=1): err= 0: pid=98265: Mon Jul 15 10:13:53 2024 00:32:42.171 read: IOPS=216, BW=867KiB/s (888kB/s)(8684KiB/10019msec) 00:32:42.171 slat (usec): min=3, max=4019, avg=14.22, stdev=121.71 00:32:42.171 clat (msec): min=33, max=131, avg=73.75, stdev=21.02 00:32:42.171 lat (msec): min=33, max=132, avg=73.76, stdev=21.02 00:32:42.171 clat percentiles (msec): 00:32:42.171 | 1.00th=[ 35], 5.00th=[ 45], 10.00th=[ 51], 20.00th=[ 59], 00:32:42.171 | 30.00th=[ 61], 40.00th=[ 64], 50.00th=[ 70], 60.00th=[ 72], 00:32:42.171 | 70.00th=[ 84], 80.00th=[ 93], 90.00th=[ 103], 95.00th=[ 118], 00:32:42.171 | 99.00th=[ 131], 99.50th=[ 132], 99.90th=[ 132], 99.95th=[ 132], 00:32:42.171 | 99.99th=[ 132] 00:32:42.171 bw ( KiB/s): min= 560, max= 1248, per=3.84%, avg=856.00, stdev=154.46, samples=19 00:32:42.171 iops : min= 140, max= 312, avg=214.00, stdev=38.61, samples=19 00:32:42.171 lat (msec) : 50=10.04%, 100=78.12%, 250=11.84% 00:32:42.171 cpu : usr=41.44%, sys=0.66%, ctx=1200, majf=0, minf=9 00:32:42.171 IO depths : 1=2.4%, 2=5.4%, 4=14.4%, 8=66.6%, 16=11.2%, 32=0.0%, >=64=0.0% 00:32:42.171 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:42.171 complete : 0=0.0%, 4=91.6%, 8=3.9%, 16=4.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:42.171 issued rwts: total=2171,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:42.171 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:42.171 filename0: (groupid=0, jobs=1): err= 0: pid=98266: Mon Jul 15 10:13:53 2024 00:32:42.171 read: IOPS=279, BW=1119KiB/s (1146kB/s)(11.0MiB/10048msec) 00:32:42.171 slat (usec): min=6, max=8016, avg=21.54, stdev=245.05 00:32:42.171 clat (msec): min=27, max=127, avg=57.06, stdev=16.46 00:32:42.171 lat (msec): min=27, max=127, avg=57.08, stdev=16.46 00:32:42.171 clat percentiles (msec): 00:32:42.171 | 1.00th=[ 32], 5.00th=[ 36], 10.00th=[ 39], 
20.00th=[ 42], 00:32:42.171 | 30.00th=[ 46], 40.00th=[ 50], 50.00th=[ 55], 60.00th=[ 61], 00:32:42.171 | 70.00th=[ 65], 80.00th=[ 70], 90.00th=[ 80], 95.00th=[ 85], 00:32:42.171 | 99.00th=[ 108], 99.50th=[ 115], 99.90th=[ 128], 99.95th=[ 128], 00:32:42.171 | 99.99th=[ 128] 00:32:42.171 bw ( KiB/s): min= 896, max= 1328, per=5.02%, avg=1118.00, stdev=133.16, samples=20 00:32:42.171 iops : min= 224, max= 332, avg=279.50, stdev=33.29, samples=20 00:32:42.171 lat (msec) : 50=42.65%, 100=56.03%, 250=1.32% 00:32:42.171 cpu : usr=45.04%, sys=0.74%, ctx=1434, majf=0, minf=9 00:32:42.171 IO depths : 1=0.4%, 2=1.0%, 4=6.7%, 8=78.7%, 16=13.3%, 32=0.0%, >=64=0.0% 00:32:42.171 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:42.171 complete : 0=0.0%, 4=89.3%, 8=6.4%, 16=4.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:42.171 issued rwts: total=2811,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:42.171 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:42.171 filename0: (groupid=0, jobs=1): err= 0: pid=98267: Mon Jul 15 10:13:53 2024 00:32:42.171 read: IOPS=242, BW=972KiB/s (995kB/s)(9764KiB/10047msec) 00:32:42.171 slat (usec): min=5, max=8038, avg=18.87, stdev=243.40 00:32:42.171 clat (msec): min=8, max=154, avg=65.62, stdev=20.70 00:32:42.171 lat (msec): min=8, max=154, avg=65.63, stdev=20.71 00:32:42.171 clat percentiles (msec): 00:32:42.171 | 1.00th=[ 19], 5.00th=[ 39], 10.00th=[ 42], 20.00th=[ 48], 00:32:42.171 | 30.00th=[ 52], 40.00th=[ 61], 50.00th=[ 63], 60.00th=[ 70], 00:32:42.171 | 70.00th=[ 73], 80.00th=[ 83], 90.00th=[ 94], 95.00th=[ 104], 00:32:42.171 | 99.00th=[ 130], 99.50th=[ 131], 99.90th=[ 155], 99.95th=[ 155], 00:32:42.171 | 99.99th=[ 155] 00:32:42.171 bw ( KiB/s): min= 640, max= 1328, per=4.36%, avg=970.00, stdev=165.38, samples=20 00:32:42.171 iops : min= 160, max= 332, avg=242.50, stdev=41.35, samples=20 00:32:42.171 lat (msec) : 10=0.29%, 20=1.02%, 50=26.14%, 100=66.82%, 250=5.74% 00:32:42.171 cpu : usr=36.11%, sys=0.44%, ctx=954, majf=0, minf=9 00:32:42.171 IO depths : 1=1.1%, 2=2.4%, 4=9.9%, 8=74.1%, 16=12.5%, 32=0.0%, >=64=0.0% 00:32:42.171 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:42.171 complete : 0=0.0%, 4=89.8%, 8=5.6%, 16=4.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:42.171 issued rwts: total=2441,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:42.171 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:42.171 filename0: (groupid=0, jobs=1): err= 0: pid=98268: Mon Jul 15 10:13:53 2024 00:32:42.171 read: IOPS=236, BW=946KiB/s (969kB/s)(9484KiB/10026msec) 00:32:42.171 slat (usec): min=6, max=11021, avg=28.98, stdev=398.89 00:32:42.171 clat (msec): min=19, max=156, avg=67.38, stdev=19.93 00:32:42.171 lat (msec): min=19, max=156, avg=67.41, stdev=19.95 00:32:42.171 clat percentiles (msec): 00:32:42.171 | 1.00th=[ 34], 5.00th=[ 39], 10.00th=[ 46], 20.00th=[ 48], 00:32:42.171 | 30.00th=[ 59], 40.00th=[ 61], 50.00th=[ 64], 60.00th=[ 71], 00:32:42.171 | 70.00th=[ 73], 80.00th=[ 84], 90.00th=[ 96], 95.00th=[ 107], 00:32:42.171 | 99.00th=[ 122], 99.50th=[ 123], 99.90th=[ 157], 99.95th=[ 157], 00:32:42.171 | 99.99th=[ 157] 00:32:42.171 bw ( KiB/s): min= 736, max= 1328, per=4.25%, avg=946.00, stdev=160.27, samples=20 00:32:42.171 iops : min= 184, max= 332, avg=236.50, stdev=40.07, samples=20 00:32:42.171 lat (msec) : 20=0.67%, 50=21.76%, 100=70.86%, 250=6.71% 00:32:42.171 cpu : usr=32.70%, sys=0.47%, ctx=848, majf=0, minf=9 00:32:42.171 IO depths : 1=1.1%, 2=2.4%, 4=9.7%, 8=74.3%, 16=12.6%, 32=0.0%, >=64=0.0% 00:32:42.171 
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:42.171 complete : 0=0.0%, 4=89.8%, 8=5.7%, 16=4.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:42.171 issued rwts: total=2371,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:42.171 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:42.171 filename0: (groupid=0, jobs=1): err= 0: pid=98269: Mon Jul 15 10:13:53 2024 00:32:42.171 read: IOPS=226, BW=906KiB/s (928kB/s)(9100KiB/10040msec) 00:32:42.171 slat (usec): min=6, max=3370, avg=12.10, stdev=70.58 00:32:42.171 clat (msec): min=31, max=135, avg=70.53, stdev=19.75 00:32:42.171 lat (msec): min=31, max=135, avg=70.54, stdev=19.75 00:32:42.171 clat percentiles (msec): 00:32:42.171 | 1.00th=[ 35], 5.00th=[ 40], 10.00th=[ 46], 20.00th=[ 57], 00:32:42.171 | 30.00th=[ 60], 40.00th=[ 63], 50.00th=[ 65], 60.00th=[ 72], 00:32:42.171 | 70.00th=[ 84], 80.00th=[ 89], 90.00th=[ 96], 95.00th=[ 104], 00:32:42.171 | 99.00th=[ 125], 99.50th=[ 128], 99.90th=[ 136], 99.95th=[ 136], 00:32:42.171 | 99.99th=[ 136] 00:32:42.171 bw ( KiB/s): min= 640, max= 1200, per=4.05%, avg=903.00, stdev=132.97, samples=20 00:32:42.171 iops : min= 160, max= 300, avg=225.75, stdev=33.24, samples=20 00:32:42.171 lat (msec) : 50=14.29%, 100=78.81%, 250=6.90% 00:32:42.171 cpu : usr=38.70%, sys=0.55%, ctx=1270, majf=0, minf=9 00:32:42.171 IO depths : 1=2.4%, 2=5.4%, 4=15.3%, 8=66.2%, 16=10.6%, 32=0.0%, >=64=0.0% 00:32:42.172 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:42.172 complete : 0=0.0%, 4=91.5%, 8=3.4%, 16=5.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:42.172 issued rwts: total=2275,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:42.172 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:42.172 filename1: (groupid=0, jobs=1): err= 0: pid=98270: Mon Jul 15 10:13:53 2024 00:32:42.172 read: IOPS=254, BW=1018KiB/s (1043kB/s)(10.00MiB/10052msec) 00:32:42.172 slat (usec): min=6, max=4023, avg=12.33, stdev=79.44 00:32:42.172 clat (msec): min=21, max=129, avg=62.69, stdev=16.98 00:32:42.172 lat (msec): min=21, max=129, avg=62.70, stdev=16.98 00:32:42.172 clat percentiles (msec): 00:32:42.172 | 1.00th=[ 31], 5.00th=[ 40], 10.00th=[ 43], 20.00th=[ 48], 00:32:42.172 | 30.00th=[ 54], 40.00th=[ 59], 50.00th=[ 62], 60.00th=[ 65], 00:32:42.172 | 70.00th=[ 70], 80.00th=[ 74], 90.00th=[ 86], 95.00th=[ 94], 00:32:42.172 | 99.00th=[ 107], 99.50th=[ 108], 99.90th=[ 130], 99.95th=[ 130], 00:32:42.172 | 99.99th=[ 130] 00:32:42.172 bw ( KiB/s): min= 816, max= 1328, per=4.57%, avg=1017.20, stdev=124.08, samples=20 00:32:42.172 iops : min= 204, max= 332, avg=254.30, stdev=31.02, samples=20 00:32:42.172 lat (msec) : 50=28.14%, 100=68.58%, 250=3.28% 00:32:42.172 cpu : usr=39.86%, sys=0.44%, ctx=1119, majf=0, minf=9 00:32:42.172 IO depths : 1=0.4%, 2=0.9%, 4=6.1%, 8=79.0%, 16=13.6%, 32=0.0%, >=64=0.0% 00:32:42.172 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:42.172 complete : 0=0.0%, 4=89.2%, 8=6.7%, 16=4.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:42.172 issued rwts: total=2559,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:42.172 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:42.172 filename1: (groupid=0, jobs=1): err= 0: pid=98271: Mon Jul 15 10:13:53 2024 00:32:42.172 read: IOPS=206, BW=827KiB/s (847kB/s)(8304KiB/10041msec) 00:32:42.172 slat (usec): min=6, max=8009, avg=24.89, stdev=292.49 00:32:42.172 clat (msec): min=28, max=152, avg=77.02, stdev=22.45 00:32:42.172 lat (msec): min=28, max=152, avg=77.04, stdev=22.45 00:32:42.172 clat percentiles 
(msec): 00:32:42.172 | 1.00th=[ 41], 5.00th=[ 48], 10.00th=[ 55], 20.00th=[ 61], 00:32:42.172 | 30.00th=[ 63], 40.00th=[ 67], 50.00th=[ 72], 60.00th=[ 80], 00:32:42.172 | 70.00th=[ 85], 80.00th=[ 94], 90.00th=[ 108], 95.00th=[ 126], 00:32:42.172 | 99.00th=[ 144], 99.50th=[ 146], 99.90th=[ 153], 99.95th=[ 153], 00:32:42.172 | 99.99th=[ 153] 00:32:42.172 bw ( KiB/s): min= 512, max= 1024, per=3.70%, avg=823.60, stdev=133.32, samples=20 00:32:42.172 iops : min= 128, max= 256, avg=205.90, stdev=33.33, samples=20 00:32:42.172 lat (msec) : 50=8.24%, 100=78.13%, 250=13.63% 00:32:42.172 cpu : usr=37.58%, sys=0.56%, ctx=1058, majf=0, minf=9 00:32:42.172 IO depths : 1=2.2%, 2=4.9%, 4=14.7%, 8=67.2%, 16=11.0%, 32=0.0%, >=64=0.0% 00:32:42.172 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:42.172 complete : 0=0.0%, 4=90.9%, 8=4.1%, 16=5.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:42.172 issued rwts: total=2076,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:42.172 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:42.172 filename1: (groupid=0, jobs=1): err= 0: pid=98272: Mon Jul 15 10:13:53 2024 00:32:42.172 read: IOPS=214, BW=857KiB/s (878kB/s)(8604KiB/10037msec) 00:32:42.172 slat (usec): min=4, max=8020, avg=14.75, stdev=172.76 00:32:42.172 clat (msec): min=25, max=143, avg=74.50, stdev=22.89 00:32:42.172 lat (msec): min=25, max=143, avg=74.51, stdev=22.88 00:32:42.172 clat percentiles (msec): 00:32:42.172 | 1.00th=[ 36], 5.00th=[ 46], 10.00th=[ 48], 20.00th=[ 58], 00:32:42.172 | 30.00th=[ 61], 40.00th=[ 64], 50.00th=[ 68], 60.00th=[ 75], 00:32:42.172 | 70.00th=[ 84], 80.00th=[ 95], 90.00th=[ 110], 95.00th=[ 123], 00:32:42.172 | 99.00th=[ 142], 99.50th=[ 142], 99.90th=[ 144], 99.95th=[ 144], 00:32:42.172 | 99.99th=[ 144] 00:32:42.172 bw ( KiB/s): min= 560, max= 1072, per=3.83%, avg=854.00, stdev=161.88, samples=20 00:32:42.172 iops : min= 140, max= 268, avg=213.50, stdev=40.47, samples=20 00:32:42.172 lat (msec) : 50=11.20%, 100=74.76%, 250=14.04% 00:32:42.172 cpu : usr=36.51%, sys=0.55%, ctx=1154, majf=0, minf=9 00:32:42.172 IO depths : 1=1.6%, 2=3.4%, 4=11.0%, 8=71.5%, 16=12.5%, 32=0.0%, >=64=0.0% 00:32:42.172 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:42.172 complete : 0=0.0%, 4=90.5%, 8=5.2%, 16=4.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:42.172 issued rwts: total=2151,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:42.172 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:42.172 filename1: (groupid=0, jobs=1): err= 0: pid=98273: Mon Jul 15 10:13:53 2024 00:32:42.172 read: IOPS=284, BW=1139KiB/s (1167kB/s)(11.2MiB/10049msec) 00:32:42.172 slat (usec): min=4, max=483, avg=10.03, stdev= 9.68 00:32:42.172 clat (msec): min=4, max=119, avg=56.02, stdev=19.37 00:32:42.172 lat (msec): min=4, max=119, avg=56.03, stdev=19.37 00:32:42.172 clat percentiles (msec): 00:32:42.172 | 1.00th=[ 5], 5.00th=[ 32], 10.00th=[ 38], 20.00th=[ 42], 00:32:42.172 | 30.00th=[ 46], 40.00th=[ 48], 50.00th=[ 55], 60.00th=[ 59], 00:32:42.172 | 70.00th=[ 64], 80.00th=[ 71], 90.00th=[ 81], 95.00th=[ 93], 00:32:42.172 | 99.00th=[ 110], 99.50th=[ 115], 99.90th=[ 121], 99.95th=[ 121], 00:32:42.172 | 99.99th=[ 121] 00:32:42.172 bw ( KiB/s): min= 864, max= 1792, per=5.12%, avg=1140.35, stdev=200.42, samples=20 00:32:42.172 iops : min= 216, max= 448, avg=285.05, stdev=50.09, samples=20 00:32:42.172 lat (msec) : 10=2.80%, 50=40.81%, 100=53.21%, 250=3.18% 00:32:42.172 cpu : usr=45.86%, sys=0.67%, ctx=1329, majf=0, minf=9 00:32:42.172 IO depths : 1=0.6%, 2=1.4%, 4=7.6%, 
8=77.5%, 16=12.9%, 32=0.0%, >=64=0.0% 00:32:42.172 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:42.172 complete : 0=0.0%, 4=89.4%, 8=6.1%, 16=4.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:42.172 issued rwts: total=2862,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:42.172 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:42.172 filename1: (groupid=0, jobs=1): err= 0: pid=98274: Mon Jul 15 10:13:53 2024 00:32:42.172 read: IOPS=265, BW=1062KiB/s (1088kB/s)(10.4MiB/10030msec) 00:32:42.172 slat (usec): min=5, max=8044, avg=27.81, stdev=360.30 00:32:42.172 clat (msec): min=2, max=142, avg=60.08, stdev=22.17 00:32:42.172 lat (msec): min=2, max=142, avg=60.10, stdev=22.17 00:32:42.172 clat percentiles (msec): 00:32:42.172 | 1.00th=[ 4], 5.00th=[ 34], 10.00th=[ 39], 20.00th=[ 47], 00:32:42.172 | 30.00th=[ 48], 40.00th=[ 53], 50.00th=[ 59], 60.00th=[ 62], 00:32:42.172 | 70.00th=[ 69], 80.00th=[ 72], 90.00th=[ 90], 95.00th=[ 103], 00:32:42.172 | 99.00th=[ 123], 99.50th=[ 127], 99.90th=[ 144], 99.95th=[ 144], 00:32:42.172 | 99.99th=[ 144] 00:32:42.172 bw ( KiB/s): min= 688, max= 1920, per=4.75%, avg=1058.05, stdev=255.87, samples=20 00:32:42.172 iops : min= 172, max= 480, avg=264.45, stdev=63.93, samples=20 00:32:42.172 lat (msec) : 4=1.20%, 10=3.00%, 50=30.08%, 100=60.68%, 250=5.03% 00:32:42.172 cpu : usr=35.13%, sys=0.63%, ctx=1002, majf=0, minf=9 00:32:42.172 IO depths : 1=1.0%, 2=2.3%, 4=8.9%, 8=75.1%, 16=12.8%, 32=0.0%, >=64=0.0% 00:32:42.172 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:42.172 complete : 0=0.0%, 4=89.8%, 8=5.8%, 16=4.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:42.172 issued rwts: total=2663,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:42.172 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:42.172 filename1: (groupid=0, jobs=1): err= 0: pid=98275: Mon Jul 15 10:13:53 2024 00:32:42.172 read: IOPS=210, BW=841KiB/s (861kB/s)(8428KiB/10023msec) 00:32:42.172 slat (usec): min=5, max=8016, avg=14.53, stdev=174.47 00:32:42.172 clat (msec): min=25, max=161, avg=76.02, stdev=23.52 00:32:42.172 lat (msec): min=25, max=161, avg=76.04, stdev=23.52 00:32:42.172 clat percentiles (msec): 00:32:42.172 | 1.00th=[ 36], 5.00th=[ 46], 10.00th=[ 48], 20.00th=[ 60], 00:32:42.172 | 30.00th=[ 61], 40.00th=[ 67], 50.00th=[ 71], 60.00th=[ 80], 00:32:42.172 | 70.00th=[ 85], 80.00th=[ 95], 90.00th=[ 108], 95.00th=[ 126], 00:32:42.172 | 99.00th=[ 144], 99.50th=[ 146], 99.90th=[ 163], 99.95th=[ 163], 00:32:42.172 | 99.99th=[ 163] 00:32:42.172 bw ( KiB/s): min= 512, max= 1072, per=3.75%, avg=835.37, stdev=150.12, samples=19 00:32:42.172 iops : min= 128, max= 268, avg=208.84, stdev=37.53, samples=19 00:32:42.172 lat (msec) : 50=12.06%, 100=74.37%, 250=13.57% 00:32:42.172 cpu : usr=32.81%, sys=0.42%, ctx=851, majf=0, minf=9 00:32:42.172 IO depths : 1=2.1%, 2=4.7%, 4=13.8%, 8=68.3%, 16=11.1%, 32=0.0%, >=64=0.0% 00:32:42.172 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:42.172 complete : 0=0.0%, 4=90.7%, 8=4.3%, 16=5.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:42.172 issued rwts: total=2107,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:42.172 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:42.172 filename1: (groupid=0, jobs=1): err= 0: pid=98276: Mon Jul 15 10:13:53 2024 00:32:42.172 read: IOPS=235, BW=944KiB/s (966kB/s)(9480KiB/10044msec) 00:32:42.172 slat (usec): min=6, max=8022, avg=20.91, stdev=253.20 00:32:42.172 clat (msec): min=31, max=151, avg=67.66, stdev=23.14 00:32:42.172 lat (msec): 
min=31, max=151, avg=67.68, stdev=23.15 00:32:42.172 clat percentiles (msec): 00:32:42.172 | 1.00th=[ 33], 5.00th=[ 38], 10.00th=[ 40], 20.00th=[ 46], 00:32:42.172 | 30.00th=[ 52], 40.00th=[ 60], 50.00th=[ 64], 60.00th=[ 71], 00:32:42.172 | 70.00th=[ 80], 80.00th=[ 91], 90.00th=[ 101], 95.00th=[ 108], 00:32:42.172 | 99.00th=[ 131], 99.50th=[ 132], 99.90th=[ 153], 99.95th=[ 153], 00:32:42.172 | 99.99th=[ 153] 00:32:42.172 bw ( KiB/s): min= 608, max= 1280, per=4.22%, avg=940.85, stdev=205.82, samples=20 00:32:42.172 iops : min= 152, max= 320, avg=235.20, stdev=51.44, samples=20 00:32:42.172 lat (msec) : 50=28.86%, 100=61.27%, 250=9.87% 00:32:42.172 cpu : usr=36.73%, sys=0.37%, ctx=1028, majf=0, minf=9 00:32:42.172 IO depths : 1=0.5%, 2=1.3%, 4=7.8%, 8=76.8%, 16=13.6%, 32=0.0%, >=64=0.0% 00:32:42.172 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:42.172 complete : 0=0.0%, 4=89.3%, 8=6.7%, 16=4.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:42.172 issued rwts: total=2370,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:42.172 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:42.172 filename1: (groupid=0, jobs=1): err= 0: pid=98277: Mon Jul 15 10:13:53 2024 00:32:42.172 read: IOPS=225, BW=900KiB/s (922kB/s)(9032KiB/10035msec) 00:32:42.172 slat (usec): min=6, max=6026, avg=16.80, stdev=170.03 00:32:42.172 clat (msec): min=29, max=153, avg=70.95, stdev=22.81 00:32:42.172 lat (msec): min=29, max=153, avg=70.97, stdev=22.81 00:32:42.172 clat percentiles (msec): 00:32:42.172 | 1.00th=[ 33], 5.00th=[ 40], 10.00th=[ 43], 20.00th=[ 52], 00:32:42.172 | 30.00th=[ 59], 40.00th=[ 63], 50.00th=[ 66], 60.00th=[ 72], 00:32:42.172 | 70.00th=[ 83], 80.00th=[ 92], 90.00th=[ 105], 95.00th=[ 112], 00:32:42.172 | 99.00th=[ 129], 99.50th=[ 134], 99.90th=[ 155], 99.95th=[ 155], 00:32:42.172 | 99.99th=[ 155] 00:32:42.172 bw ( KiB/s): min= 616, max= 1280, per=4.02%, avg=896.80, stdev=168.69, samples=20 00:32:42.172 iops : min= 154, max= 320, avg=224.20, stdev=42.17, samples=20 00:32:42.172 lat (msec) : 50=18.78%, 100=69.13%, 250=12.09% 00:32:42.173 cpu : usr=44.00%, sys=0.51%, ctx=1361, majf=0, minf=9 00:32:42.173 IO depths : 1=1.7%, 2=3.5%, 4=10.5%, 8=72.2%, 16=12.1%, 32=0.0%, >=64=0.0% 00:32:42.173 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:42.173 complete : 0=0.0%, 4=90.2%, 8=5.3%, 16=4.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:42.173 issued rwts: total=2258,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:42.173 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:42.173 filename2: (groupid=0, jobs=1): err= 0: pid=98278: Mon Jul 15 10:13:53 2024 00:32:42.173 read: IOPS=239, BW=959KiB/s (982kB/s)(9636KiB/10048msec) 00:32:42.173 slat (usec): min=6, max=7018, avg=16.61, stdev=190.11 00:32:42.173 clat (msec): min=28, max=141, avg=66.48, stdev=19.84 00:32:42.173 lat (msec): min=28, max=141, avg=66.49, stdev=19.84 00:32:42.173 clat percentiles (msec): 00:32:42.173 | 1.00th=[ 33], 5.00th=[ 40], 10.00th=[ 44], 20.00th=[ 48], 00:32:42.173 | 30.00th=[ 55], 40.00th=[ 61], 50.00th=[ 64], 60.00th=[ 68], 00:32:42.173 | 70.00th=[ 73], 80.00th=[ 84], 90.00th=[ 96], 95.00th=[ 105], 00:32:42.173 | 99.00th=[ 121], 99.50th=[ 129], 99.90th=[ 142], 99.95th=[ 142], 00:32:42.173 | 99.99th=[ 142] 00:32:42.173 bw ( KiB/s): min= 736, max= 1280, per=4.30%, avg=957.20, stdev=154.04, samples=20 00:32:42.173 iops : min= 184, max= 320, avg=239.30, stdev=38.51, samples=20 00:32:42.173 lat (msec) : 50=23.62%, 100=70.98%, 250=5.40% 00:32:42.173 cpu : usr=38.77%, sys=0.45%, ctx=1320, 
majf=0, minf=9 00:32:42.173 IO depths : 1=1.2%, 2=2.7%, 4=9.6%, 8=74.1%, 16=12.4%, 32=0.0%, >=64=0.0% 00:32:42.173 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:42.173 complete : 0=0.0%, 4=89.7%, 8=5.8%, 16=4.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:42.173 issued rwts: total=2409,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:42.173 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:42.173 filename2: (groupid=0, jobs=1): err= 0: pid=98279: Mon Jul 15 10:13:53 2024 00:32:42.173 read: IOPS=210, BW=844KiB/s (864kB/s)(8456KiB/10024msec) 00:32:42.173 slat (usec): min=3, max=8031, avg=21.34, stdev=258.74 00:32:42.173 clat (msec): min=35, max=168, avg=75.55, stdev=23.60 00:32:42.173 lat (msec): min=35, max=168, avg=75.57, stdev=23.60 00:32:42.173 clat percentiles (msec): 00:32:42.173 | 1.00th=[ 37], 5.00th=[ 41], 10.00th=[ 48], 20.00th=[ 56], 00:32:42.173 | 30.00th=[ 63], 40.00th=[ 66], 50.00th=[ 71], 60.00th=[ 81], 00:32:42.173 | 70.00th=[ 87], 80.00th=[ 94], 90.00th=[ 107], 95.00th=[ 125], 00:32:42.173 | 99.00th=[ 140], 99.50th=[ 150], 99.90th=[ 169], 99.95th=[ 169], 00:32:42.173 | 99.99th=[ 169] 00:32:42.173 bw ( KiB/s): min= 600, max= 1088, per=3.76%, avg=838.74, stdev=148.36, samples=19 00:32:42.173 iops : min= 150, max= 272, avg=209.68, stdev=37.09, samples=19 00:32:42.173 lat (msec) : 50=13.72%, 100=72.75%, 250=13.53% 00:32:42.173 cpu : usr=43.02%, sys=0.53%, ctx=1603, majf=0, minf=9 00:32:42.173 IO depths : 1=2.6%, 2=5.7%, 4=15.2%, 8=66.0%, 16=10.5%, 32=0.0%, >=64=0.0% 00:32:42.173 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:42.173 complete : 0=0.0%, 4=91.3%, 8=3.6%, 16=5.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:42.173 issued rwts: total=2114,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:42.173 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:42.173 filename2: (groupid=0, jobs=1): err= 0: pid=98280: Mon Jul 15 10:13:53 2024 00:32:42.173 read: IOPS=208, BW=834KiB/s (854kB/s)(8360KiB/10027msec) 00:32:42.173 slat (usec): min=4, max=8016, avg=17.68, stdev=219.00 00:32:42.173 clat (msec): min=34, max=155, avg=76.65, stdev=24.20 00:32:42.173 lat (msec): min=34, max=155, avg=76.67, stdev=24.19 00:32:42.173 clat percentiles (msec): 00:32:42.173 | 1.00th=[ 40], 5.00th=[ 47], 10.00th=[ 48], 20.00th=[ 58], 00:32:42.173 | 30.00th=[ 62], 40.00th=[ 64], 50.00th=[ 72], 60.00th=[ 80], 00:32:42.173 | 70.00th=[ 87], 80.00th=[ 95], 90.00th=[ 114], 95.00th=[ 128], 00:32:42.173 | 99.00th=[ 142], 99.50th=[ 144], 99.90th=[ 157], 99.95th=[ 157], 00:32:42.173 | 99.99th=[ 157] 00:32:42.173 bw ( KiB/s): min= 512, max= 1152, per=3.72%, avg=829.65, stdev=151.27, samples=20 00:32:42.173 iops : min= 128, max= 288, avg=207.40, stdev=37.81, samples=20 00:32:42.173 lat (msec) : 50=12.34%, 100=72.73%, 250=14.93% 00:32:42.173 cpu : usr=41.92%, sys=0.60%, ctx=1204, majf=0, minf=9 00:32:42.173 IO depths : 1=3.1%, 2=6.6%, 4=16.6%, 8=63.7%, 16=10.0%, 32=0.0%, >=64=0.0% 00:32:42.173 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:42.173 complete : 0=0.0%, 4=91.8%, 8=3.0%, 16=5.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:42.173 issued rwts: total=2090,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:42.173 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:42.173 filename2: (groupid=0, jobs=1): err= 0: pid=98281: Mon Jul 15 10:13:53 2024 00:32:42.173 read: IOPS=248, BW=996KiB/s (1020kB/s)(9.79MiB/10072msec) 00:32:42.173 slat (usec): min=5, max=8025, avg=13.86, stdev=160.13 00:32:42.173 clat (msec): min=5, max=154, 
avg=64.06, stdev=25.08 00:32:42.173 lat (msec): min=5, max=154, avg=64.08, stdev=25.08 00:32:42.173 clat percentiles (msec): 00:32:42.173 | 1.00th=[ 9], 5.00th=[ 35], 10.00th=[ 37], 20.00th=[ 47], 00:32:42.173 | 30.00th=[ 48], 40.00th=[ 56], 50.00th=[ 61], 60.00th=[ 66], 00:32:42.173 | 70.00th=[ 72], 80.00th=[ 85], 90.00th=[ 96], 95.00th=[ 118], 00:32:42.173 | 99.00th=[ 132], 99.50th=[ 138], 99.90th=[ 155], 99.95th=[ 155], 00:32:42.173 | 99.99th=[ 155] 00:32:42.173 bw ( KiB/s): min= 512, max= 1667, per=4.47%, avg=996.20, stdev=234.35, samples=20 00:32:42.173 iops : min= 128, max= 416, avg=249.00, stdev=58.46, samples=20 00:32:42.173 lat (msec) : 10=1.91%, 20=0.64%, 50=33.19%, 100=57.00%, 250=7.26% 00:32:42.173 cpu : usr=37.45%, sys=0.47%, ctx=951, majf=0, minf=9 00:32:42.173 IO depths : 1=1.2%, 2=2.5%, 4=10.5%, 8=73.7%, 16=12.1%, 32=0.0%, >=64=0.0% 00:32:42.173 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:42.173 complete : 0=0.0%, 4=90.0%, 8=5.2%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:42.173 issued rwts: total=2507,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:42.173 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:42.173 filename2: (groupid=0, jobs=1): err= 0: pid=98282: Mon Jul 15 10:13:53 2024 00:32:42.173 read: IOPS=230, BW=920KiB/s (942kB/s)(9248KiB/10049msec) 00:32:42.173 slat (nsec): min=6878, max=32971, avg=10740.05, stdev=4114.15 00:32:42.173 clat (msec): min=26, max=134, avg=69.36, stdev=20.86 00:32:42.173 lat (msec): min=26, max=134, avg=69.37, stdev=20.86 00:32:42.173 clat percentiles (msec): 00:32:42.173 | 1.00th=[ 34], 5.00th=[ 37], 10.00th=[ 46], 20.00th=[ 50], 00:32:42.173 | 30.00th=[ 59], 40.00th=[ 61], 50.00th=[ 69], 60.00th=[ 72], 00:32:42.173 | 70.00th=[ 82], 80.00th=[ 85], 90.00th=[ 96], 95.00th=[ 110], 00:32:42.173 | 99.00th=[ 129], 99.50th=[ 131], 99.90th=[ 136], 99.95th=[ 136], 00:32:42.173 | 99.99th=[ 136] 00:32:42.173 bw ( KiB/s): min= 688, max= 1168, per=4.12%, avg=918.40, stdev=128.07, samples=20 00:32:42.173 iops : min= 172, max= 292, avg=229.60, stdev=32.02, samples=20 00:32:42.173 lat (msec) : 50=21.15%, 100=70.80%, 250=8.04% 00:32:42.173 cpu : usr=32.72%, sys=0.51%, ctx=853, majf=0, minf=9 00:32:42.173 IO depths : 1=1.4%, 2=3.0%, 4=11.5%, 8=72.0%, 16=12.1%, 32=0.0%, >=64=0.0% 00:32:42.173 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:42.173 complete : 0=0.0%, 4=90.5%, 8=4.8%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:42.173 issued rwts: total=2312,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:42.173 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:42.173 filename2: (groupid=0, jobs=1): err= 0: pid=98283: Mon Jul 15 10:13:53 2024 00:32:42.173 read: IOPS=202, BW=811KiB/s (831kB/s)(8132KiB/10024msec) 00:32:42.173 slat (usec): min=3, max=4013, avg=14.96, stdev=125.46 00:32:42.173 clat (msec): min=26, max=150, avg=78.76, stdev=21.51 00:32:42.173 lat (msec): min=26, max=150, avg=78.78, stdev=21.51 00:32:42.173 clat percentiles (msec): 00:32:42.173 | 1.00th=[ 39], 5.00th=[ 49], 10.00th=[ 56], 20.00th=[ 61], 00:32:42.173 | 30.00th=[ 64], 40.00th=[ 70], 50.00th=[ 75], 60.00th=[ 83], 00:32:42.173 | 70.00th=[ 90], 80.00th=[ 95], 90.00th=[ 108], 95.00th=[ 123], 00:32:42.173 | 99.00th=[ 138], 99.50th=[ 142], 99.90th=[ 150], 99.95th=[ 150], 00:32:42.173 | 99.99th=[ 150] 00:32:42.173 bw ( KiB/s): min= 600, max= 1024, per=3.62%, avg=806.79, stdev=115.71, samples=19 00:32:42.173 iops : min= 150, max= 256, avg=201.68, stdev=28.93, samples=19 00:32:42.173 lat (msec) : 50=6.39%, 
100=80.08%, 250=13.53% 00:32:42.173 cpu : usr=38.00%, sys=0.47%, ctx=1093, majf=0, minf=9 00:32:42.173 IO depths : 1=1.7%, 2=4.1%, 4=13.7%, 8=68.9%, 16=11.7%, 32=0.0%, >=64=0.0% 00:32:42.173 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:42.173 complete : 0=0.0%, 4=90.8%, 8=4.4%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:42.173 issued rwts: total=2033,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:42.173 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:42.173 filename2: (groupid=0, jobs=1): err= 0: pid=98284: Mon Jul 15 10:13:53 2024 00:32:42.173 read: IOPS=211, BW=844KiB/s (865kB/s)(8464KiB/10023msec) 00:32:42.173 slat (usec): min=5, max=4017, avg=16.32, stdev=150.62 00:32:42.173 clat (msec): min=27, max=155, avg=75.68, stdev=22.64 00:32:42.173 lat (msec): min=27, max=155, avg=75.70, stdev=22.64 00:32:42.173 clat percentiles (msec): 00:32:42.173 | 1.00th=[ 36], 5.00th=[ 42], 10.00th=[ 51], 20.00th=[ 59], 00:32:42.173 | 30.00th=[ 63], 40.00th=[ 66], 50.00th=[ 70], 60.00th=[ 81], 00:32:42.173 | 70.00th=[ 86], 80.00th=[ 92], 90.00th=[ 106], 95.00th=[ 123], 00:32:42.173 | 99.00th=[ 138], 99.50th=[ 148], 99.90th=[ 157], 99.95th=[ 157], 00:32:42.173 | 99.99th=[ 157] 00:32:42.173 bw ( KiB/s): min= 640, max= 1040, per=3.75%, avg=836.21, stdev=109.31, samples=19 00:32:42.173 iops : min= 160, max= 260, avg=209.05, stdev=27.33, samples=19 00:32:42.173 lat (msec) : 50=9.22%, 100=78.54%, 250=12.24% 00:32:42.173 cpu : usr=38.20%, sys=0.71%, ctx=1347, majf=0, minf=9 00:32:42.173 IO depths : 1=2.6%, 2=5.9%, 4=15.9%, 8=65.2%, 16=10.4%, 32=0.0%, >=64=0.0% 00:32:42.173 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:42.173 complete : 0=0.0%, 4=91.4%, 8=3.5%, 16=5.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:42.173 issued rwts: total=2116,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:42.173 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:42.173 filename2: (groupid=0, jobs=1): err= 0: pid=98285: Mon Jul 15 10:13:53 2024 00:32:42.173 read: IOPS=241, BW=965KiB/s (988kB/s)(9672KiB/10026msec) 00:32:42.173 slat (usec): min=6, max=4492, avg=14.19, stdev=122.39 00:32:42.173 clat (msec): min=26, max=183, avg=66.19, stdev=22.28 00:32:42.173 lat (msec): min=26, max=183, avg=66.20, stdev=22.28 00:32:42.173 clat percentiles (msec): 00:32:42.173 | 1.00th=[ 34], 5.00th=[ 40], 10.00th=[ 42], 20.00th=[ 47], 00:32:42.173 | 30.00th=[ 54], 40.00th=[ 59], 50.00th=[ 64], 60.00th=[ 66], 00:32:42.173 | 70.00th=[ 73], 80.00th=[ 84], 90.00th=[ 97], 95.00th=[ 108], 00:32:42.173 | 99.00th=[ 138], 99.50th=[ 140], 99.90th=[ 184], 99.95th=[ 184], 00:32:42.173 | 99.99th=[ 184] 00:32:42.173 bw ( KiB/s): min= 640, max= 1296, per=4.32%, avg=963.60, stdev=186.18, samples=20 00:32:42.173 iops : min= 160, max= 324, avg=240.90, stdev=46.55, samples=20 00:32:42.173 lat (msec) : 50=27.83%, 100=65.01%, 250=7.15% 00:32:42.173 cpu : usr=43.62%, sys=0.52%, ctx=1354, majf=0, minf=9 00:32:42.173 IO depths : 1=1.6%, 2=3.4%, 4=11.0%, 8=72.1%, 16=11.9%, 32=0.0%, >=64=0.0% 00:32:42.173 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:42.173 complete : 0=0.0%, 4=90.3%, 8=5.0%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:42.174 issued rwts: total=2418,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:42.174 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:42.174 00:32:42.174 Run status group 0 (all jobs): 00:32:42.174 READ: bw=21.7MiB/s (22.8MB/s), 811KiB/s-1139KiB/s (831kB/s-1167kB/s), io=219MiB (230MB), run=10019-10072msec 00:32:42.174 10:13:54 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:32:42.174 10:13:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:32:42.174 10:13:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:32:42.174 10:13:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:32:42.174 10:13:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:32:42.174 10:13:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:32:42.174 10:13:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:42.174 10:13:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:42.174 10:13:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:42.174 10:13:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:32:42.174 10:13:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:42.174 10:13:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:42.174 10:13:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:42.174 10:13:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:32:42.174 10:13:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:32:42.174 10:13:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:32:42.174 10:13:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:42.174 10:13:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:42.174 10:13:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:42.174 10:13:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:42.174 10:13:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:32:42.174 10:13:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:42.174 10:13:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:42.174 10:13:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:42.174 10:13:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:32:42.174 10:13:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:32:42.174 10:13:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:32:42.174 10:13:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:32:42.174 10:13:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:42.174 10:13:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:42.174 10:13:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:42.174 10:13:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:32:42.174 10:13:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:42.174 10:13:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:42.174 10:13:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
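The destroy_subsystems helper traced above drives the cleanup purely through SPDK's JSON-RPC interface. Outside the test harness the same teardown can be reproduced with scripts/rpc.py against the running target; a minimal sketch, assuming the default /var/tmp/spdk.sock RPC socket and the bdev/subsystem names used by dif.sh:

#!/usr/bin/env bash
# Tear down the three NVMe-oF subsystems and their backing null bdevs,
# mirroring the rpc_cmd calls traced above.
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
for i in 0 1 2; do
  "$RPC" nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode$i"  # drop listeners, namespaces and the subsystem itself
  "$RPC" bdev_null_delete "bdev_null$i"                       # then release the backing null bdev
done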
00:32:42.174 10:13:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:32:42.174 10:13:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:32:42.174 10:13:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:32:42.174 10:13:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:32:42.174 10:13:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:32:42.174 10:13:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:32:42.174 10:13:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:32:42.174 10:13:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:32:42.174 10:13:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:32:42.174 10:13:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:32:42.174 10:13:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:32:42.174 10:13:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:32:42.174 10:13:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:42.174 10:13:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:42.174 bdev_null0 00:32:42.174 10:13:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:42.174 10:13:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:32:42.174 10:13:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:42.174 10:13:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:42.174 10:13:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:42.174 10:13:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:32:42.174 10:13:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:42.174 10:13:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:42.174 10:13:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:42.174 10:13:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:42.174 10:13:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:42.174 10:13:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:42.174 [2024-07-15 10:13:54.263633] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:42.174 10:13:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:42.174 10:13:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:32:42.174 10:13:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:32:42.174 10:13:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:32:42.174 10:13:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:32:42.174 10:13:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:42.174 10:13:54 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:42.174 bdev_null1 00:32:42.174 10:13:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:42.174 10:13:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:32:42.174 10:13:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:42.174 10:13:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:42.174 10:13:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:42.174 10:13:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:32:42.174 10:13:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:42.174 10:13:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:42.174 10:13:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:42.174 10:13:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:42.174 10:13:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:42.174 10:13:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:42.174 10:13:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:42.174 10:13:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:32:42.174 10:13:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:32:42.174 10:13:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:32:42.174 10:13:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:32:42.174 10:13:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:32:42.174 10:13:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:42.174 10:13:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:32:42.174 10:13:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:32:42.174 { 00:32:42.174 "params": { 00:32:42.174 "name": "Nvme$subsystem", 00:32:42.174 "trtype": "$TEST_TRANSPORT", 00:32:42.174 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:42.174 "adrfam": "ipv4", 00:32:42.174 "trsvcid": "$NVMF_PORT", 00:32:42.174 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:42.174 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:42.174 "hdgst": ${hdgst:-false}, 00:32:42.174 "ddgst": ${ddgst:-false} 00:32:42.174 }, 00:32:42.174 "method": "bdev_nvme_attach_controller" 00:32:42.174 } 00:32:42.174 EOF 00:32:42.174 )") 00:32:42.174 10:13:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:32:42.174 10:13:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:42.174 10:13:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:32:42.174 10:13:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:32:42.174 10:13:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local 
fio_dir=/usr/src/fio 00:32:42.174 10:13:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:42.174 10:13:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:32:42.174 10:13:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:32:42.174 10:13:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:32:42.174 10:13:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:32:42.174 10:13:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:32:42.174 10:13:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:32:42.174 10:13:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:32:42.174 10:13:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:32:42.174 10:13:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:32:42.174 10:13:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:32:42.174 10:13:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:32:42.174 10:13:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:32:42.174 10:13:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:32:42.174 10:13:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:32:42.174 { 00:32:42.174 "params": { 00:32:42.174 "name": "Nvme$subsystem", 00:32:42.174 "trtype": "$TEST_TRANSPORT", 00:32:42.174 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:42.174 "adrfam": "ipv4", 00:32:42.174 "trsvcid": "$NVMF_PORT", 00:32:42.174 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:42.174 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:42.174 "hdgst": ${hdgst:-false}, 00:32:42.174 "ddgst": ${ddgst:-false} 00:32:42.174 }, 00:32:42.175 "method": "bdev_nvme_attach_controller" 00:32:42.175 } 00:32:42.175 EOF 00:32:42.175 )") 00:32:42.175 10:13:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:32:42.175 10:13:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:32:42.175 10:13:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:32:42.175 10:13:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
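The fio_plugin helper traced here does little more than point fio at SPDK's external bdev ioengine and, when ASan is in use, preload the sanitizer runtime ahead of the plugin. Stripped of the harness plumbing (which feeds both the JSON config and the job file over /dev/fd), the invocation reduces to the sketch below; bdev.json and dif.fio are assumed stand-ins for those two descriptors:

# Run fio against SPDK bdevs through the external spdk_bdev ioengine.
# bdev.json holds the bdev configuration, dif.fio the job description.
PLUGIN=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
LD_PRELOAD="$PLUGIN" /usr/src/fio/fio \
  --ioengine=spdk_bdev \
  --spdk_json_conf=bdev.json \
  dif.fio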
00:32:42.175 10:13:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:32:42.175 10:13:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:32:42.175 "params": { 00:32:42.175 "name": "Nvme0", 00:32:42.175 "trtype": "tcp", 00:32:42.175 "traddr": "10.0.0.2", 00:32:42.175 "adrfam": "ipv4", 00:32:42.175 "trsvcid": "4420", 00:32:42.175 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:42.175 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:42.175 "hdgst": false, 00:32:42.175 "ddgst": false 00:32:42.175 }, 00:32:42.175 "method": "bdev_nvme_attach_controller" 00:32:42.175 },{ 00:32:42.175 "params": { 00:32:42.175 "name": "Nvme1", 00:32:42.175 "trtype": "tcp", 00:32:42.175 "traddr": "10.0.0.2", 00:32:42.175 "adrfam": "ipv4", 00:32:42.175 "trsvcid": "4420", 00:32:42.175 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:42.175 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:42.175 "hdgst": false, 00:32:42.175 "ddgst": false 00:32:42.175 }, 00:32:42.175 "method": "bdev_nvme_attach_controller" 00:32:42.175 }' 00:32:42.175 10:13:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:32:42.175 10:13:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:32:42.175 10:13:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:32:42.175 10:13:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:32:42.175 10:13:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:32:42.175 10:13:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:32:42.175 10:13:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:32:42.175 10:13:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:32:42.175 10:13:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:32:42.175 10:13:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:42.175 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:32:42.175 ... 00:32:42.175 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:32:42.175 ... 
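The filename0/filename1 job banners above come from the job file that gen_fio_conf emits from the variables set at dif.sh@115 (bs=8k,16k,128k, numjobs=2, iodepth=8, runtime=5, one extra file). A hand-written equivalent would look roughly like the sketch below; the Nvme0n1/Nvme1n1 bdev names and the thread/direct/time_based settings are assumptions, the rest is taken from the trace:

# Approximate job file behind the four-thread randread run that follows.
cat > dif.fio <<'FIO'
[global]
ioengine=spdk_bdev
thread=1
direct=1
rw=randread
# read,write,trim block sizes: matches the (R) 8k / (W) 16k / (T) 128k banner above
bs=8k,16k,128k
numjobs=2
iodepth=8
runtime=5
time_based=1

# one job per attached controller; Nvme0n1/Nvme1n1 are the assumed bdev names
[filename0]
filename=Nvme0n1

[filename1]
filename=Nvme1n1
FIO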
00:32:42.175 fio-3.35 00:32:42.175 Starting 4 threads 00:32:47.457 00:32:47.457 filename0: (groupid=0, jobs=1): err= 0: pid=98417: Mon Jul 15 10:14:00 2024 00:32:47.457 read: IOPS=2420, BW=18.9MiB/s (19.8MB/s)(94.6MiB/5003msec) 00:32:47.457 slat (nsec): min=6151, max=45797, avg=12355.70, stdev=3920.76 00:32:47.457 clat (usec): min=1075, max=4641, avg=3252.96, stdev=265.83 00:32:47.457 lat (usec): min=1090, max=4655, avg=3265.32, stdev=265.89 00:32:47.457 clat percentiles (usec): 00:32:47.457 | 1.00th=[ 2409], 5.00th=[ 3032], 10.00th=[ 3064], 20.00th=[ 3097], 00:32:47.457 | 30.00th=[ 3130], 40.00th=[ 3163], 50.00th=[ 3228], 60.00th=[ 3294], 00:32:47.457 | 70.00th=[ 3359], 80.00th=[ 3458], 90.00th=[ 3523], 95.00th=[ 3621], 00:32:47.457 | 99.00th=[ 4015], 99.50th=[ 4178], 99.90th=[ 4359], 99.95th=[ 4424], 00:32:47.457 | 99.99th=[ 4621] 00:32:47.458 bw ( KiB/s): min=19072, max=19840, per=25.17%, avg=19470.22, stdev=303.20, samples=9 00:32:47.458 iops : min= 2384, max= 2480, avg=2433.78, stdev=37.90, samples=9 00:32:47.458 lat (msec) : 2=0.28%, 4=98.67%, 10=1.05% 00:32:47.458 cpu : usr=96.42%, sys=2.68%, ctx=3, majf=0, minf=0 00:32:47.458 IO depths : 1=8.0%, 2=25.0%, 4=50.0%, 8=17.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:47.458 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:47.458 complete : 0=0.0%, 4=89.3%, 8=10.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:47.458 issued rwts: total=12111,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:47.458 latency : target=0, window=0, percentile=100.00%, depth=8 00:32:47.458 filename0: (groupid=0, jobs=1): err= 0: pid=98418: Mon Jul 15 10:14:00 2024 00:32:47.458 read: IOPS=2417, BW=18.9MiB/s (19.8MB/s)(94.4MiB/5001msec) 00:32:47.458 slat (nsec): min=5789, max=41755, avg=13956.45, stdev=3675.53 00:32:47.458 clat (usec): min=1224, max=6484, avg=3246.33, stdev=371.48 00:32:47.458 lat (usec): min=1236, max=6498, avg=3260.28, stdev=371.53 00:32:47.458 clat percentiles (usec): 00:32:47.458 | 1.00th=[ 1958], 5.00th=[ 2999], 10.00th=[ 3032], 20.00th=[ 3064], 00:32:47.458 | 30.00th=[ 3097], 40.00th=[ 3163], 50.00th=[ 3195], 60.00th=[ 3261], 00:32:47.458 | 70.00th=[ 3359], 80.00th=[ 3425], 90.00th=[ 3523], 95.00th=[ 3621], 00:32:47.458 | 99.00th=[ 4948], 99.50th=[ 5211], 99.90th=[ 5800], 99.95th=[ 5866], 00:32:47.458 | 99.99th=[ 6325] 00:32:47.458 bw ( KiB/s): min=18816, max=19840, per=25.11%, avg=19427.56, stdev=366.50, samples=9 00:32:47.458 iops : min= 2352, max= 2480, avg=2428.44, stdev=45.81, samples=9 00:32:47.458 lat (msec) : 2=1.07%, 4=96.69%, 10=2.24% 00:32:47.458 cpu : usr=96.38%, sys=2.76%, ctx=20, majf=0, minf=9 00:32:47.458 IO depths : 1=6.8%, 2=25.0%, 4=50.0%, 8=18.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:47.458 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:47.458 complete : 0=0.0%, 4=89.4%, 8=10.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:47.458 issued rwts: total=12088,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:47.458 latency : target=0, window=0, percentile=100.00%, depth=8 00:32:47.458 filename1: (groupid=0, jobs=1): err= 0: pid=98419: Mon Jul 15 10:14:00 2024 00:32:47.458 read: IOPS=2417, BW=18.9MiB/s (19.8MB/s)(94.4MiB/5001msec) 00:32:47.458 slat (nsec): min=5732, max=60171, avg=7502.88, stdev=2488.97 00:32:47.458 clat (usec): min=2502, max=3944, avg=3272.75, stdev=181.72 00:32:47.458 lat (usec): min=2509, max=3951, avg=3280.26, stdev=182.17 00:32:47.458 clat percentiles (usec): 00:32:47.458 | 1.00th=[ 2966], 5.00th=[ 3032], 10.00th=[ 3064], 20.00th=[ 3097], 00:32:47.458 | 30.00th=[ 3130], 
40.00th=[ 3195], 50.00th=[ 3228], 60.00th=[ 3294], 00:32:47.458 | 70.00th=[ 3392], 80.00th=[ 3458], 90.00th=[ 3523], 95.00th=[ 3589], 00:32:47.458 | 99.00th=[ 3720], 99.50th=[ 3752], 99.90th=[ 3851], 99.95th=[ 3851], 00:32:47.458 | 99.99th=[ 3916] 00:32:47.458 bw ( KiB/s): min=18816, max=19968, per=25.13%, avg=19441.78, stdev=401.95, samples=9 00:32:47.458 iops : min= 2352, max= 2496, avg=2430.22, stdev=50.24, samples=9 00:32:47.458 lat (msec) : 4=100.00% 00:32:47.458 cpu : usr=96.04%, sys=3.04%, ctx=102, majf=0, minf=0 00:32:47.458 IO depths : 1=8.1%, 2=25.0%, 4=50.0%, 8=16.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:47.458 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:47.458 complete : 0=0.0%, 4=89.3%, 8=10.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:47.458 issued rwts: total=12088,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:47.458 latency : target=0, window=0, percentile=100.00%, depth=8 00:32:47.458 filename1: (groupid=0, jobs=1): err= 0: pid=98420: Mon Jul 15 10:14:00 2024 00:32:47.458 read: IOPS=2417, BW=18.9MiB/s (19.8MB/s)(94.4MiB/5001msec) 00:32:47.458 slat (nsec): min=6057, max=45168, avg=14403.13, stdev=3323.73 00:32:47.458 clat (usec): min=1730, max=5034, avg=3246.15, stdev=248.03 00:32:47.458 lat (usec): min=1741, max=5060, avg=3260.55, stdev=248.38 00:32:47.458 clat percentiles (usec): 00:32:47.458 | 1.00th=[ 2442], 5.00th=[ 2999], 10.00th=[ 3032], 20.00th=[ 3064], 00:32:47.458 | 30.00th=[ 3097], 40.00th=[ 3163], 50.00th=[ 3228], 60.00th=[ 3294], 00:32:47.458 | 70.00th=[ 3359], 80.00th=[ 3425], 90.00th=[ 3523], 95.00th=[ 3621], 00:32:47.458 | 99.00th=[ 3982], 99.50th=[ 4228], 99.90th=[ 4490], 99.95th=[ 4948], 00:32:47.458 | 99.99th=[ 5014] 00:32:47.458 bw ( KiB/s): min=18816, max=19840, per=25.11%, avg=19427.56, stdev=366.41, samples=9 00:32:47.458 iops : min= 2352, max= 2480, avg=2428.44, stdev=45.80, samples=9 00:32:47.458 lat (msec) : 2=0.08%, 4=98.95%, 10=0.97% 00:32:47.458 cpu : usr=96.10%, sys=2.88%, ctx=441, majf=0, minf=9 00:32:47.458 IO depths : 1=7.9%, 2=25.0%, 4=50.0%, 8=17.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:47.458 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:47.458 complete : 0=0.0%, 4=89.3%, 8=10.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:47.458 issued rwts: total=12088,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:47.458 latency : target=0, window=0, percentile=100.00%, depth=8 00:32:47.458 00:32:47.458 Run status group 0 (all jobs): 00:32:47.458 READ: bw=75.5MiB/s (79.2MB/s), 18.9MiB/s-18.9MiB/s (19.8MB/s-19.8MB/s), io=378MiB (396MB), run=5001-5003msec 00:32:47.458 10:14:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:32:47.458 10:14:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:32:47.458 10:14:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:32:47.458 10:14:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:32:47.458 10:14:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:32:47.458 10:14:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:32:47.458 10:14:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:47.458 10:14:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:47.458 10:14:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:47.458 10:14:00 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:32:47.458 10:14:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:47.458 10:14:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:47.458 10:14:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:47.458 10:14:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:32:47.458 10:14:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:32:47.458 10:14:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:32:47.458 10:14:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:47.458 10:14:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:47.458 10:14:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:47.458 10:14:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:47.458 10:14:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:32:47.458 10:14:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:47.458 10:14:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:47.458 10:14:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:47.458 00:32:47.458 real 0m23.797s 00:32:47.458 user 2m9.323s 00:32:47.458 sys 0m3.280s 00:32:47.458 ************************************ 00:32:47.458 END TEST fio_dif_rand_params 00:32:47.458 ************************************ 00:32:47.458 10:14:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1124 -- # xtrace_disable 00:32:47.458 10:14:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:47.458 10:14:00 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:32:47.458 10:14:00 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:32:47.458 10:14:00 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:32:47.458 10:14:00 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:47.458 10:14:00 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:32:47.458 ************************************ 00:32:47.458 START TEST fio_dif_digest 00:32:47.458 ************************************ 00:32:47.458 10:14:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1123 -- # fio_dif_digest 00:32:47.458 10:14:00 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:32:47.458 10:14:00 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:32:47.458 10:14:00 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:32:47.458 10:14:00 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:32:47.458 10:14:00 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:32:47.458 10:14:00 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:32:47.458 10:14:00 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:32:47.458 10:14:00 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:32:47.458 10:14:00 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:32:47.458 10:14:00 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:32:47.458 10:14:00 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 
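create_subsystems 0 for the digest test issues the same four RPCs as before, only with DIF type 3 on the null bdev. Collapsed into a standalone script (the tcp transport is assumed to have been created earlier in the run), the trace that follows amounts to:

# 64 MiB null bdev, 512-byte blocks + 16-byte metadata, DIF type 3,
# exported over NVMe/TCP on 10.0.0.2:4420 as cnode0.
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
"$RPC" bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
"$RPC" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
"$RPC" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
"$RPC" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420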
00:32:47.458 10:14:00 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:32:47.458 10:14:00 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:32:47.458 10:14:00 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:32:47.458 10:14:00 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:32:47.458 10:14:00 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:32:47.458 10:14:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:47.458 10:14:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:32:47.458 bdev_null0 00:32:47.458 10:14:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:47.458 10:14:00 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:32:47.458 10:14:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:47.458 10:14:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:32:47.458 10:14:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:47.458 10:14:00 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:32:47.458 10:14:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:47.458 10:14:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:32:47.458 10:14:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:47.458 10:14:00 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:47.458 10:14:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:47.458 10:14:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:32:47.458 [2024-07-15 10:14:00.539231] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:47.458 10:14:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:47.458 10:14:00 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:32:47.458 10:14:00 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:32:47.459 10:14:00 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:32:47.459 10:14:00 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # config=() 00:32:47.459 10:14:00 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:32:47.459 10:14:00 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:47.459 10:14:00 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:32:47.459 10:14:00 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:32:47.459 10:14:00 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # local subsystem config 00:32:47.459 10:14:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:47.459 10:14:00 nvmf_dif.fio_dif_digest -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:32:47.459 10:14:00 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:32:47.459 { 00:32:47.459 "params": { 00:32:47.459 
"name": "Nvme$subsystem", 00:32:47.459 "trtype": "$TEST_TRANSPORT", 00:32:47.459 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:47.459 "adrfam": "ipv4", 00:32:47.459 "trsvcid": "$NVMF_PORT", 00:32:47.459 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:47.459 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:47.459 "hdgst": ${hdgst:-false}, 00:32:47.459 "ddgst": ${ddgst:-false} 00:32:47.459 }, 00:32:47.459 "method": "bdev_nvme_attach_controller" 00:32:47.459 } 00:32:47.459 EOF 00:32:47.459 )") 00:32:47.459 10:14:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:32:47.459 10:14:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:47.459 10:14:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local sanitizers 00:32:47.459 10:14:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:32:47.459 10:14:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # shift 00:32:47.459 10:14:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local asan_lib= 00:32:47.459 10:14:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:32:47.459 10:14:00 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # cat 00:32:47.459 10:14:00 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:32:47.459 10:14:00 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:32:47.459 10:14:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:32:47.459 10:14:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:32:47.459 10:14:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libasan 00:32:47.459 10:14:00 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # jq . 
00:32:47.459 10:14:00 nvmf_dif.fio_dif_digest -- nvmf/common.sh@557 -- # IFS=, 00:32:47.459 10:14:00 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:32:47.459 "params": { 00:32:47.459 "name": "Nvme0", 00:32:47.459 "trtype": "tcp", 00:32:47.459 "traddr": "10.0.0.2", 00:32:47.459 "adrfam": "ipv4", 00:32:47.459 "trsvcid": "4420", 00:32:47.459 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:47.459 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:47.459 "hdgst": true, 00:32:47.459 "ddgst": true 00:32:47.459 }, 00:32:47.459 "method": "bdev_nvme_attach_controller" 00:32:47.459 }' 00:32:47.459 10:14:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:32:47.459 10:14:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:32:47.459 10:14:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:32:47.459 10:14:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:32:47.459 10:14:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:32:47.459 10:14:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:32:47.459 10:14:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:32:47.459 10:14:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:32:47.459 10:14:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:32:47.459 10:14:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:47.459 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:32:47.459 ... 
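Relative to the earlier rand_params job file, the digest run only changes the block size, queue depth, job count and runtime; the digest work itself is negotiated on the transport through the hdgst/ddgst flags in the bdev_nvme_attach_controller params printed above. A sketch of the corresponding job file, with the bdev name again assumed to be Nvme0n1:

# Job file sketch for the 3-thread, 128 KiB, qd=3 digest run below.
cat > dif_digest.fio <<'FIO'
[global]
ioengine=spdk_bdev
thread=1
direct=1
rw=randread
bs=128k,128k,128k
numjobs=3
iodepth=3
runtime=10
time_based=1

[filename0]
filename=Nvme0n1
FIO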
00:32:47.459 fio-3.35 00:32:47.459 Starting 3 threads 00:32:59.691 00:32:59.691 filename0: (groupid=0, jobs=1): err= 0: pid=98526: Mon Jul 15 10:14:11 2024 00:32:59.691 read: IOPS=290, BW=36.3MiB/s (38.0MB/s)(363MiB/10007msec) 00:32:59.691 slat (nsec): min=3749, max=83073, avg=11524.26, stdev=3127.67 00:32:59.691 clat (usec): min=5579, max=53541, avg=10326.99, stdev=4568.87 00:32:59.691 lat (usec): min=5592, max=53557, avg=10338.52, stdev=4568.88 00:32:59.691 clat percentiles (usec): 00:32:59.691 | 1.00th=[ 8160], 5.00th=[ 8586], 10.00th=[ 8979], 20.00th=[ 9241], 00:32:59.691 | 30.00th=[ 9372], 40.00th=[ 9634], 50.00th=[ 9765], 60.00th=[ 9896], 00:32:59.691 | 70.00th=[10159], 80.00th=[10421], 90.00th=[10945], 95.00th=[11338], 00:32:59.691 | 99.00th=[49546], 99.50th=[50594], 99.90th=[52691], 99.95th=[52691], 00:32:59.691 | 99.99th=[53740] 00:32:59.691 bw ( KiB/s): min=32256, max=40704, per=36.82%, avg=37120.00, stdev=2574.78, samples=20 00:32:59.691 iops : min= 252, max= 318, avg=290.00, stdev=20.12, samples=20 00:32:59.691 lat (msec) : 10=62.07%, 20=36.69%, 50=0.59%, 100=0.65% 00:32:59.691 cpu : usr=95.55%, sys=3.42%, ctx=87, majf=0, minf=0 00:32:59.691 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:59.691 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:59.691 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:59.691 issued rwts: total=2903,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:59.691 latency : target=0, window=0, percentile=100.00%, depth=3 00:32:59.691 filename0: (groupid=0, jobs=1): err= 0: pid=98527: Mon Jul 15 10:14:11 2024 00:32:59.691 read: IOPS=279, BW=35.0MiB/s (36.7MB/s)(350MiB/10002msec) 00:32:59.691 slat (usec): min=5, max=129, avg=11.05, stdev= 3.94 00:32:59.691 clat (usec): min=5083, max=50232, avg=10701.67, stdev=1882.30 00:32:59.691 lat (usec): min=5094, max=50243, avg=10712.72, stdev=1882.55 00:32:59.691 clat percentiles (usec): 00:32:59.691 | 1.00th=[ 6063], 5.00th=[ 7308], 10.00th=[ 9241], 20.00th=[ 9896], 00:32:59.691 | 30.00th=[10290], 40.00th=[10552], 50.00th=[10814], 60.00th=[11076], 00:32:59.691 | 70.00th=[11338], 80.00th=[11600], 90.00th=[12125], 95.00th=[12649], 00:32:59.691 | 99.00th=[13435], 99.50th=[13960], 99.90th=[46924], 99.95th=[49021], 00:32:59.691 | 99.99th=[50070] 00:32:59.691 bw ( KiB/s): min=34048, max=39424, per=35.62%, avg=35907.37, stdev=1348.81, samples=19 00:32:59.691 iops : min= 266, max= 308, avg=280.53, stdev=10.54, samples=19 00:32:59.691 lat (msec) : 10=20.86%, 20=79.04%, 50=0.07%, 100=0.04% 00:32:59.691 cpu : usr=95.46%, sys=3.51%, ctx=176, majf=0, minf=0 00:32:59.691 IO depths : 1=2.4%, 2=97.6%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:59.691 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:59.691 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:59.691 issued rwts: total=2800,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:59.691 latency : target=0, window=0, percentile=100.00%, depth=3 00:32:59.691 filename0: (groupid=0, jobs=1): err= 0: pid=98528: Mon Jul 15 10:14:11 2024 00:32:59.691 read: IOPS=219, BW=27.5MiB/s (28.8MB/s)(276MiB/10043msec) 00:32:59.691 slat (nsec): min=5946, max=37628, avg=10417.31, stdev=3332.73 00:32:59.691 clat (usec): min=3234, max=45801, avg=13620.70, stdev=2061.95 00:32:59.691 lat (usec): min=3240, max=45813, avg=13631.12, stdev=2062.34 00:32:59.691 clat percentiles (usec): 00:32:59.691 | 1.00th=[ 7832], 5.00th=[ 8979], 10.00th=[12256], 20.00th=[13042], 
00:32:59.691 | 30.00th=[13304], 40.00th=[13566], 50.00th=[13829], 60.00th=[14091], 00:32:59.691 | 70.00th=[14353], 80.00th=[14746], 90.00th=[15270], 95.00th=[15795], 00:32:59.691 | 99.00th=[16581], 99.50th=[16712], 99.90th=[17433], 99.95th=[43779], 00:32:59.691 | 99.99th=[45876] 00:32:59.691 bw ( KiB/s): min=26624, max=33280, per=27.99%, avg=28213.85, stdev=1468.36, samples=20 00:32:59.691 iops : min= 208, max= 260, avg=220.40, stdev=11.49, samples=20 00:32:59.691 lat (msec) : 4=0.41%, 10=6.57%, 20=92.93%, 50=0.09% 00:32:59.691 cpu : usr=95.99%, sys=3.16%, ctx=6, majf=0, minf=0 00:32:59.691 IO depths : 1=15.8%, 2=84.2%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:59.691 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:59.691 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:59.691 issued rwts: total=2206,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:59.691 latency : target=0, window=0, percentile=100.00%, depth=3 00:32:59.691 00:32:59.691 Run status group 0 (all jobs): 00:32:59.691 READ: bw=98.4MiB/s (103MB/s), 27.5MiB/s-36.3MiB/s (28.8MB/s-38.0MB/s), io=989MiB (1037MB), run=10002-10043msec 00:32:59.691 10:14:11 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:32:59.691 10:14:11 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:32:59.691 10:14:11 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:32:59.691 10:14:11 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:32:59.691 10:14:11 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:32:59.691 10:14:11 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:32:59.691 10:14:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:59.691 10:14:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:32:59.691 10:14:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:59.691 10:14:11 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:32:59.691 10:14:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:59.691 10:14:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:32:59.691 ************************************ 00:32:59.691 END TEST fio_dif_digest 00:32:59.691 ************************************ 00:32:59.691 10:14:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:59.691 00:32:59.691 real 0m11.007s 00:32:59.691 user 0m29.401s 00:32:59.691 sys 0m1.292s 00:32:59.691 10:14:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1124 -- # xtrace_disable 00:32:59.691 10:14:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:32:59.691 10:14:11 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:32:59.691 10:14:11 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:32:59.691 10:14:11 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:32:59.691 10:14:11 nvmf_dif -- nvmf/common.sh@488 -- # nvmfcleanup 00:32:59.691 10:14:11 nvmf_dif -- nvmf/common.sh@117 -- # sync 00:32:59.691 10:14:11 nvmf_dif -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:32:59.691 10:14:11 nvmf_dif -- nvmf/common.sh@120 -- # set +e 00:32:59.691 10:14:11 nvmf_dif -- nvmf/common.sh@121 -- # for i in {1..20} 00:32:59.691 10:14:11 nvmf_dif -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:32:59.691 rmmod nvme_tcp 00:32:59.691 rmmod 
nvme_fabrics 00:32:59.691 rmmod nvme_keyring 00:32:59.691 10:14:11 nvmf_dif -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:32:59.691 10:14:11 nvmf_dif -- nvmf/common.sh@124 -- # set -e 00:32:59.691 10:14:11 nvmf_dif -- nvmf/common.sh@125 -- # return 0 00:32:59.691 10:14:11 nvmf_dif -- nvmf/common.sh@489 -- # '[' -n 97753 ']' 00:32:59.691 10:14:11 nvmf_dif -- nvmf/common.sh@490 -- # killprocess 97753 00:32:59.691 10:14:11 nvmf_dif -- common/autotest_common.sh@948 -- # '[' -z 97753 ']' 00:32:59.691 10:14:11 nvmf_dif -- common/autotest_common.sh@952 -- # kill -0 97753 00:32:59.691 10:14:11 nvmf_dif -- common/autotest_common.sh@953 -- # uname 00:32:59.691 10:14:11 nvmf_dif -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:32:59.691 10:14:11 nvmf_dif -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 97753 00:32:59.691 killing process with pid 97753 00:32:59.691 10:14:11 nvmf_dif -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:32:59.691 10:14:11 nvmf_dif -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:32:59.691 10:14:11 nvmf_dif -- common/autotest_common.sh@966 -- # echo 'killing process with pid 97753' 00:32:59.691 10:14:11 nvmf_dif -- common/autotest_common.sh@967 -- # kill 97753 00:32:59.691 10:14:11 nvmf_dif -- common/autotest_common.sh@972 -- # wait 97753 00:32:59.691 10:14:11 nvmf_dif -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:32:59.691 10:14:11 nvmf_dif -- nvmf/common.sh@493 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:32:59.691 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:32:59.691 Waiting for block devices as requested 00:32:59.691 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:32:59.691 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:32:59.691 10:14:12 nvmf_dif -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:32:59.691 10:14:12 nvmf_dif -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:32:59.691 10:14:12 nvmf_dif -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:32:59.691 10:14:12 nvmf_dif -- nvmf/common.sh@278 -- # remove_spdk_ns 00:32:59.691 10:14:12 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:59.691 10:14:12 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:32:59.691 10:14:12 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:59.691 10:14:12 nvmf_dif -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:32:59.691 00:32:59.691 real 1m0.475s 00:32:59.691 user 3m57.759s 00:32:59.691 sys 0m11.185s 00:32:59.691 10:14:12 nvmf_dif -- common/autotest_common.sh@1124 -- # xtrace_disable 00:32:59.691 10:14:12 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:32:59.691 ************************************ 00:32:59.691 END TEST nvmf_dif 00:32:59.691 ************************************ 00:32:59.691 10:14:12 -- common/autotest_common.sh@1142 -- # return 0 00:32:59.691 10:14:12 -- spdk/autotest.sh@293 -- # run_test nvmf_abort_qd_sizes /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:32:59.691 10:14:12 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:32:59.691 10:14:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:59.691 10:14:12 -- common/autotest_common.sh@10 -- # set +x 00:32:59.691 ************************************ 00:32:59.691 START TEST nvmf_abort_qd_sizes 00:32:59.691 ************************************ 00:32:59.691 10:14:12 nvmf_abort_qd_sizes -- 
common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:32:59.691 * Looking for test storage... 00:32:59.691 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:32:59.691 10:14:12 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:32:59.691 10:14:12 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:32:59.691 10:14:12 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:59.691 10:14:12 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:59.691 10:14:12 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:59.691 10:14:12 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:59.691 10:14:12 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:59.691 10:14:12 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:59.691 10:14:12 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:59.691 10:14:12 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:59.691 10:14:12 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:59.691 10:14:12 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:59.691 10:14:12 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec 00:32:59.691 10:14:12 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=a2b6b25a-cc90-4aea-9f09-c06f8a634aec 00:32:59.692 10:14:12 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:59.692 10:14:12 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:59.692 10:14:12 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:32:59.692 10:14:12 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:59.692 10:14:12 nvmf_abort_qd_sizes -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:32:59.692 10:14:12 nvmf_abort_qd_sizes -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:59.692 10:14:12 nvmf_abort_qd_sizes -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:59.692 10:14:12 nvmf_abort_qd_sizes -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:59.692 10:14:12 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:59.692 10:14:12 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:59.692 10:14:12 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:59.692 10:14:12 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:32:59.692 10:14:12 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:59.692 10:14:12 nvmf_abort_qd_sizes -- nvmf/common.sh@47 -- # : 0 00:32:59.692 10:14:12 nvmf_abort_qd_sizes -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:32:59.692 10:14:12 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:32:59.692 10:14:12 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:59.692 10:14:12 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:59.692 10:14:12 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:59.692 10:14:12 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:32:59.692 10:14:12 nvmf_abort_qd_sizes -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:32:59.692 10:14:12 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # have_pci_nics=0 00:32:59.692 10:14:12 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:32:59.692 10:14:12 nvmf_abort_qd_sizes -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:32:59.692 10:14:12 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:59.692 10:14:12 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # prepare_net_devs 00:32:59.692 10:14:12 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # local -g is_hw=no 00:32:59.692 10:14:12 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # remove_spdk_ns 00:32:59.692 10:14:12 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:59.692 10:14:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:32:59.692 10:14:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:59.692 10:14:12 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:32:59.692 10:14:12 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:32:59.692 10:14:12 nvmf_abort_qd_sizes -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:32:59.692 10:14:12 nvmf_abort_qd_sizes -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:32:59.692 10:14:12 nvmf_abort_qd_sizes -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:32:59.692 10:14:12 nvmf_abort_qd_sizes -- nvmf/common.sh@432 -- # nvmf_veth_init 00:32:59.692 10:14:12 nvmf_abort_qd_sizes -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:59.692 10:14:12 nvmf_abort_qd_sizes -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:59.692 10:14:12 nvmf_abort_qd_sizes -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:32:59.692 10:14:12 nvmf_abort_qd_sizes -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:32:59.692 10:14:12 
nvmf_abort_qd_sizes -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:32:59.692 10:14:12 nvmf_abort_qd_sizes -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:32:59.692 10:14:12 nvmf_abort_qd_sizes -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:32:59.692 10:14:12 nvmf_abort_qd_sizes -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:59.692 10:14:12 nvmf_abort_qd_sizes -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:32:59.692 10:14:12 nvmf_abort_qd_sizes -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:32:59.692 10:14:12 nvmf_abort_qd_sizes -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:32:59.692 10:14:12 nvmf_abort_qd_sizes -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:32:59.692 10:14:12 nvmf_abort_qd_sizes -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:32:59.692 10:14:12 nvmf_abort_qd_sizes -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:32:59.692 Cannot find device "nvmf_tgt_br" 00:32:59.692 10:14:12 nvmf_abort_qd_sizes -- nvmf/common.sh@155 -- # true 00:32:59.692 10:14:12 nvmf_abort_qd_sizes -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:32:59.692 Cannot find device "nvmf_tgt_br2" 00:32:59.692 10:14:12 nvmf_abort_qd_sizes -- nvmf/common.sh@156 -- # true 00:32:59.692 10:14:12 nvmf_abort_qd_sizes -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:32:59.692 10:14:12 nvmf_abort_qd_sizes -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:32:59.692 Cannot find device "nvmf_tgt_br" 00:32:59.692 10:14:12 nvmf_abort_qd_sizes -- nvmf/common.sh@158 -- # true 00:32:59.692 10:14:12 nvmf_abort_qd_sizes -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:32:59.692 Cannot find device "nvmf_tgt_br2" 00:32:59.692 10:14:12 nvmf_abort_qd_sizes -- nvmf/common.sh@159 -- # true 00:32:59.692 10:14:12 nvmf_abort_qd_sizes -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:32:59.692 10:14:13 nvmf_abort_qd_sizes -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:32:59.692 10:14:13 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:32:59.692 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:32:59.692 10:14:13 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # true 00:32:59.692 10:14:13 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:32:59.692 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:32:59.692 10:14:13 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # true 00:32:59.692 10:14:13 nvmf_abort_qd_sizes -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:32:59.692 10:14:13 nvmf_abort_qd_sizes -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:32:59.692 10:14:13 nvmf_abort_qd_sizes -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:32:59.692 10:14:13 nvmf_abort_qd_sizes -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:32:59.692 10:14:13 nvmf_abort_qd_sizes -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:32:59.692 10:14:13 nvmf_abort_qd_sizes -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:32:59.692 10:14:13 nvmf_abort_qd_sizes -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:32:59.692 10:14:13 
nvmf_abort_qd_sizes -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:32:59.692 10:14:13 nvmf_abort_qd_sizes -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:32:59.692 10:14:13 nvmf_abort_qd_sizes -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:32:59.692 10:14:13 nvmf_abort_qd_sizes -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:32:59.692 10:14:13 nvmf_abort_qd_sizes -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:32:59.692 10:14:13 nvmf_abort_qd_sizes -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:32:59.692 10:14:13 nvmf_abort_qd_sizes -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:32:59.692 10:14:13 nvmf_abort_qd_sizes -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:32:59.692 10:14:13 nvmf_abort_qd_sizes -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:32:59.692 10:14:13 nvmf_abort_qd_sizes -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:32:59.692 10:14:13 nvmf_abort_qd_sizes -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:32:59.692 10:14:13 nvmf_abort_qd_sizes -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:32:59.692 10:14:13 nvmf_abort_qd_sizes -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:32:59.692 10:14:13 nvmf_abort_qd_sizes -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:32:59.692 10:14:13 nvmf_abort_qd_sizes -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:32:59.692 10:14:13 nvmf_abort_qd_sizes -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:32:59.692 10:14:13 nvmf_abort_qd_sizes -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:32:59.692 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:59.692 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.071 ms 00:32:59.692 00:32:59.692 --- 10.0.0.2 ping statistics --- 00:32:59.692 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:59.692 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:32:59.692 10:14:13 nvmf_abort_qd_sizes -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:32:59.692 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:32:59.692 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.041 ms 00:32:59.692 00:32:59.692 --- 10.0.0.3 ping statistics --- 00:32:59.692 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:59.692 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:32:59.692 10:14:13 nvmf_abort_qd_sizes -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:32:59.692 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:59.692 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.038 ms 00:32:59.692 00:32:59.692 --- 10.0.0.1 ping statistics --- 00:32:59.692 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:59.692 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:32:59.692 10:14:13 nvmf_abort_qd_sizes -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:59.692 10:14:13 nvmf_abort_qd_sizes -- nvmf/common.sh@433 -- # return 0 00:32:59.692 10:14:13 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:32:59.692 10:14:13 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:33:00.628 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:33:00.628 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:33:00.628 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:33:00.628 10:14:14 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:00.628 10:14:14 nvmf_abort_qd_sizes -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:33:00.628 10:14:14 nvmf_abort_qd_sizes -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:33:00.628 10:14:14 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:00.628 10:14:14 nvmf_abort_qd_sizes -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:33:00.628 10:14:14 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:33:00.628 10:14:14 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:33:00.628 10:14:14 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:33:00.628 10:14:14 nvmf_abort_qd_sizes -- common/autotest_common.sh@722 -- # xtrace_disable 00:33:00.628 10:14:14 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:33:00.887 10:14:14 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # nvmfpid=99126 00:33:00.887 10:14:14 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:33:00.887 10:14:14 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # waitforlisten 99126 00:33:00.887 10:14:14 nvmf_abort_qd_sizes -- common/autotest_common.sh@829 -- # '[' -z 99126 ']' 00:33:00.887 10:14:14 nvmf_abort_qd_sizes -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:00.887 10:14:14 nvmf_abort_qd_sizes -- common/autotest_common.sh@834 -- # local max_retries=100 00:33:00.887 10:14:14 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:00.887 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:00.887 10:14:14 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # xtrace_disable 00:33:00.887 10:14:14 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:33:00.887 [2024-07-15 10:14:14.265530] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
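For reference, the nvmf_veth_init sequence traced above builds an all-virtual NVMe/TCP topology: the initiator side stays on the host at 10.0.0.1, the target runs inside the nvmf_tgt_ns_spdk namespace at 10.0.0.2, and the veth pairs are joined by the nvmf_br bridge. A minimal standalone sketch of the same idea, using the interface names and addresses from the trace (the second target interface at 10.0.0.3 and the FORWARD iptables rule are omitted for brevity; assumes root):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator end stays on the host
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br      # target end is moved into the namespace
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge                              # bridge ties the two host-side peers together
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$dev" up; done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1            # connectivity check, as in the pings above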
00:33:00.887 [2024-07-15 10:14:14.265603] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:00.887 [2024-07-15 10:14:14.403782] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:33:01.147 [2024-07-15 10:14:14.510703] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:01.147 [2024-07-15 10:14:14.510750] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:01.147 [2024-07-15 10:14:14.510756] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:01.147 [2024-07-15 10:14:14.510761] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:01.147 [2024-07-15 10:14:14.510765] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:01.147 [2024-07-15 10:14:14.510847] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:33:01.147 [2024-07-15 10:14:14.511108] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:33:01.147 [2024-07-15 10:14:14.512169] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:01.147 [2024-07-15 10:14:14.512172] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:33:01.717 10:14:15 nvmf_abort_qd_sizes -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:33:01.717 10:14:15 nvmf_abort_qd_sizes -- common/autotest_common.sh@862 -- # return 0 00:33:01.717 10:14:15 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:33:01.717 10:14:15 nvmf_abort_qd_sizes -- common/autotest_common.sh@728 -- # xtrace_disable 00:33:01.717 10:14:15 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:33:01.717 10:14:15 nvmf_abort_qd_sizes -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:01.717 10:14:15 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:33:01.717 10:14:15 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:33:01.717 10:14:15 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:33:01.717 10:14:15 nvmf_abort_qd_sizes -- scripts/common.sh@309 -- # local bdf bdfs 00:33:01.717 10:14:15 nvmf_abort_qd_sizes -- scripts/common.sh@310 -- # local nvmes 00:33:01.717 10:14:15 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # [[ -n '' ]] 00:33:01.717 10:14:15 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:33:01.717 10:14:15 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # iter_pci_class_code 01 08 02 00:33:01.717 10:14:15 nvmf_abort_qd_sizes -- scripts/common.sh@295 -- # local bdf= 00:33:01.717 10:14:15 nvmf_abort_qd_sizes -- scripts/common.sh@297 -- # iter_all_pci_class_code 01 08 02 00:33:01.717 10:14:15 nvmf_abort_qd_sizes -- scripts/common.sh@230 -- # local class 00:33:01.717 10:14:15 nvmf_abort_qd_sizes -- scripts/common.sh@231 -- # local subclass 00:33:01.717 10:14:15 nvmf_abort_qd_sizes -- scripts/common.sh@232 -- # local progif 00:33:01.717 10:14:15 nvmf_abort_qd_sizes -- scripts/common.sh@233 -- # printf %02x 1 00:33:01.717 10:14:15 nvmf_abort_qd_sizes -- scripts/common.sh@233 -- # 
class=01 00:33:01.717 10:14:15 nvmf_abort_qd_sizes -- scripts/common.sh@234 -- # printf %02x 8 00:33:01.717 10:14:15 nvmf_abort_qd_sizes -- scripts/common.sh@234 -- # subclass=08 00:33:01.717 10:14:15 nvmf_abort_qd_sizes -- scripts/common.sh@235 -- # printf %02x 2 00:33:01.717 10:14:15 nvmf_abort_qd_sizes -- scripts/common.sh@235 -- # progif=02 00:33:01.717 10:14:15 nvmf_abort_qd_sizes -- scripts/common.sh@237 -- # hash lspci 00:33:01.717 10:14:15 nvmf_abort_qd_sizes -- scripts/common.sh@238 -- # '[' 02 '!=' 00 ']' 00:33:01.717 10:14:15 nvmf_abort_qd_sizes -- scripts/common.sh@240 -- # grep -i -- -p02 00:33:01.717 10:14:15 nvmf_abort_qd_sizes -- scripts/common.sh@239 -- # lspci -mm -n -D 00:33:01.717 10:14:15 nvmf_abort_qd_sizes -- scripts/common.sh@241 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:33:01.717 10:14:15 nvmf_abort_qd_sizes -- scripts/common.sh@242 -- # tr -d '"' 00:33:01.717 10:14:15 nvmf_abort_qd_sizes -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:33:01.717 10:14:15 nvmf_abort_qd_sizes -- scripts/common.sh@298 -- # pci_can_use 0000:00:10.0 00:33:01.717 10:14:15 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # local i 00:33:01.717 10:14:15 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # [[ =~ 0000:00:10.0 ]] 00:33:01.717 10:14:15 nvmf_abort_qd_sizes -- scripts/common.sh@22 -- # [[ -z '' ]] 00:33:01.717 10:14:15 nvmf_abort_qd_sizes -- scripts/common.sh@24 -- # return 0 00:33:01.717 10:14:15 nvmf_abort_qd_sizes -- scripts/common.sh@299 -- # echo 0000:00:10.0 00:33:01.717 10:14:15 nvmf_abort_qd_sizes -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:33:01.717 10:14:15 nvmf_abort_qd_sizes -- scripts/common.sh@298 -- # pci_can_use 0000:00:11.0 00:33:01.717 10:14:15 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # local i 00:33:01.717 10:14:15 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # [[ =~ 0000:00:11.0 ]] 00:33:01.717 10:14:15 nvmf_abort_qd_sizes -- scripts/common.sh@22 -- # [[ -z '' ]] 00:33:01.717 10:14:15 nvmf_abort_qd_sizes -- scripts/common.sh@24 -- # return 0 00:33:01.717 10:14:15 nvmf_abort_qd_sizes -- scripts/common.sh@299 -- # echo 0000:00:11.0 00:33:01.717 10:14:15 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:33:01.717 10:14:15 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:33:01.717 10:14:15 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:33:01.717 10:14:15 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:33:01.717 10:14:15 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:33:01.717 10:14:15 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:33:01.717 10:14:15 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:33:01.717 10:14:15 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:33:01.717 10:14:15 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:33:01.717 10:14:15 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:33:01.717 10:14:15 nvmf_abort_qd_sizes -- scripts/common.sh@325 -- # (( 2 )) 00:33:01.717 10:14:15 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:33:01.717 10:14:15 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 2 > 0 )) 00:33:01.717 10:14:15 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:00:10.0 00:33:01.717 10:14:15 
nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:33:01.717 10:14:15 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:33:01.717 10:14:15 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:01.717 10:14:15 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:33:01.717 ************************************ 00:33:01.717 START TEST spdk_target_abort 00:33:01.717 ************************************ 00:33:01.717 10:14:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1123 -- # spdk_target 00:33:01.718 10:14:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:33:01.718 10:14:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:00:10.0 -b spdk_target 00:33:01.718 10:14:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:01.718 10:14:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:33:01.978 spdk_targetn1 00:33:01.978 10:14:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:01.978 10:14:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:33:01.978 10:14:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:01.978 10:14:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:33:01.978 [2024-07-15 10:14:15.325907] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:01.978 10:14:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:01.978 10:14:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:33:01.978 10:14:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:01.978 10:14:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:33:01.978 10:14:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:01.978 10:14:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:33:01.978 10:14:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:01.978 10:14:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:33:01.978 10:14:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:01.978 10:14:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:33:01.978 10:14:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:01.978 10:14:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:33:01.978 [2024-07-15 10:14:15.365995] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:01.978 10:14:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:01.978 10:14:15 
nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:33:01.978 10:14:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:33:01.978 10:14:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:33:01.978 10:14:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:33:01.978 10:14:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:33:01.978 10:14:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:33:01.978 10:14:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:33:01.978 10:14:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:33:01.978 10:14:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:33:01.978 10:14:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:33:01.978 10:14:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:33:01.978 10:14:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:33:01.978 10:14:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:33:01.978 10:14:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:33:01.978 10:14:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:33:01.978 10:14:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:33:01.978 10:14:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:33:01.978 10:14:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:33:01.978 10:14:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:33:01.978 10:14:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:33:01.978 10:14:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:33:05.264 Initializing NVMe Controllers 00:33:05.264 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:33:05.264 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:33:05.264 Initialization complete. Launching workers. 
00:33:05.264 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 12688, failed: 0 00:33:05.264 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1145, failed to submit 11543 00:33:05.264 success 768, unsuccess 377, failed 0 00:33:05.264 10:14:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:33:05.264 10:14:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:33:08.542 Initializing NVMe Controllers 00:33:08.542 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:33:08.542 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:33:08.542 Initialization complete. Launching workers. 00:33:08.542 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 5948, failed: 0 00:33:08.542 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1225, failed to submit 4723 00:33:08.542 success 307, unsuccess 918, failed 0 00:33:08.542 10:14:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:33:08.542 10:14:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:33:11.821 Initializing NVMe Controllers 00:33:11.821 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:33:11.821 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:33:11.821 Initialization complete. Launching workers. 
00:33:11.821 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 31353, failed: 0 00:33:11.821 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2800, failed to submit 28553 00:33:11.821 success 514, unsuccess 2286, failed 0 00:33:11.821 10:14:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:33:11.821 10:14:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:11.821 10:14:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:33:11.821 10:14:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:11.821 10:14:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:33:11.821 10:14:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:11.821 10:14:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:33:14.346 10:14:27 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:14.346 10:14:27 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 99126 00:33:14.346 10:14:27 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@948 -- # '[' -z 99126 ']' 00:33:14.346 10:14:27 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # kill -0 99126 00:33:14.346 10:14:27 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # uname 00:33:14.346 10:14:27 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:33:14.346 10:14:27 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 99126 00:33:14.346 10:14:27 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:33:14.346 10:14:27 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:33:14.346 10:14:27 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@966 -- # echo 'killing process with pid 99126' 00:33:14.346 killing process with pid 99126 00:33:14.346 10:14:27 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@967 -- # kill 99126 00:33:14.346 10:14:27 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # wait 99126 00:33:14.346 00:33:14.346 real 0m12.556s 00:33:14.346 user 0m48.980s 00:33:14.346 sys 0m1.406s 00:33:14.346 10:14:27 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:33:14.346 10:14:27 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:33:14.346 ************************************ 00:33:14.346 END TEST spdk_target_abort 00:33:14.346 ************************************ 00:33:14.346 10:14:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@1142 -- # return 0 00:33:14.346 10:14:27 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:33:14.347 10:14:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:33:14.347 10:14:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:14.347 10:14:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:33:14.347 
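Condensed, the spdk_target_abort run above is four RPCs against the running nvmf_tgt followed by a queue-depth sweep of the abort example. A sketch of the equivalent manual steps (addresses, NQN and PCI device taken from the trace; driving scripts/rpc.py against the default /var/tmp/spdk.sock is an assumption here, the test itself goes through its rpc_cmd helper):

    SPDK=/home/vagrant/spdk_repo/spdk
    RPC=$SPDK/scripts/rpc.py

    $RPC bdev_nvme_attach_controller -t pcie -a 0000:00:10.0 -b spdk_target     # exposes bdev spdk_targetn1
    $RPC nvmf_create_transport -t tcp -o -u 8192
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420

    for qd in 4 24 64; do                                                       # same queue depths as the runs above
      $SPDK/build/examples/abort -q "$qd" -w rw -M 50 -o 4096 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
    done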
************************************ 00:33:14.347 START TEST kernel_target_abort 00:33:14.347 ************************************ 00:33:14.347 10:14:27 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1123 -- # kernel_target 00:33:14.347 10:14:27 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:33:14.347 10:14:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@741 -- # local ip 00:33:14.347 10:14:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:14.347 10:14:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:14.347 10:14:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:14.347 10:14:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:14.347 10:14:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:14.347 10:14:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:14.347 10:14:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:14.347 10:14:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:14.347 10:14:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:14.347 10:14:27 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:33:14.347 10:14:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:33:14.347 10:14:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:33:14.347 10:14:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:33:14.347 10:14:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:33:14.347 10:14:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:33:14.347 10:14:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@639 -- # local block nvme 00:33:14.347 10:14:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:33:14.347 10:14:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@642 -- # modprobe nvmet 00:33:14.347 10:14:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:33:14.347 10:14:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@647 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:33:14.914 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:33:14.914 Waiting for block devices as requested 00:33:14.914 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:33:15.171 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:33:15.171 10:14:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:33:15.171 10:14:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:33:15.171 10:14:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:33:15.171 10:14:28 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:33:15.171 10:14:28 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:33:15.171 10:14:28 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:33:15.171 10:14:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:33:15.171 10:14:28 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:33:15.172 10:14:28 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:33:15.172 No valid GPT data, bailing 00:33:15.172 10:14:28 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:33:15.172 10:14:28 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:33:15.172 10:14:28 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:33:15.172 10:14:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:33:15.172 10:14:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:33:15.172 10:14:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n2 ]] 00:33:15.172 10:14:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n2 00:33:15.172 10:14:28 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme0n2 00:33:15.172 10:14:28 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:33:15.172 10:14:28 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:33:15.172 10:14:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n2 00:33:15.172 10:14:28 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:33:15.172 10:14:28 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:33:15.172 No valid GPT data, bailing 00:33:15.172 10:14:28 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 
00:33:15.430 10:14:28 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:33:15.430 10:14:28 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:33:15.430 10:14:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n2 00:33:15.430 10:14:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:33:15.430 10:14:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n3 ]] 00:33:15.430 10:14:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n3 00:33:15.431 10:14:28 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme0n3 00:33:15.431 10:14:28 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:33:15.431 10:14:28 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:33:15.431 10:14:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n3 00:33:15.431 10:14:28 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:33:15.431 10:14:28 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:33:15.431 No valid GPT data, bailing 00:33:15.431 10:14:28 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:33:15.431 10:14:28 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:33:15.431 10:14:28 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:33:15.431 10:14:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n3 00:33:15.431 10:14:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:33:15.431 10:14:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme1n1 ]] 00:33:15.431 10:14:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme1n1 00:33:15.431 10:14:28 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:33:15.431 10:14:28 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:33:15.431 10:14:28 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:33:15.431 10:14:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme1n1 00:33:15.431 10:14:28 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:33:15.431 10:14:28 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:33:15.431 No valid GPT data, bailing 00:33:15.431 10:14:28 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:33:15.431 10:14:28 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:33:15.431 10:14:28 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:33:15.431 10:14:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme1n1 00:33:15.431 10:14:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # [[ 
-b /dev/nvme1n1 ]] 00:33:15.431 10:14:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:33:15.431 10:14:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:33:15.431 10:14:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:33:15.431 10:14:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:33:15.431 10:14:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # echo 1 00:33:15.431 10:14:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@668 -- # echo /dev/nvme1n1 00:33:15.431 10:14:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # echo 1 00:33:15.431 10:14:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:33:15.431 10:14:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@672 -- # echo tcp 00:33:15.431 10:14:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # echo 4420 00:33:15.431 10:14:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # echo ipv4 00:33:15.431 10:14:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:33:15.431 10:14:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec --hostid=a2b6b25a-cc90-4aea-9f09-c06f8a634aec -a 10.0.0.1 -t tcp -s 4420 00:33:15.431 00:33:15.431 Discovery Log Number of Records 2, Generation counter 2 00:33:15.431 =====Discovery Log Entry 0====== 00:33:15.431 trtype: tcp 00:33:15.431 adrfam: ipv4 00:33:15.431 subtype: current discovery subsystem 00:33:15.431 treq: not specified, sq flow control disable supported 00:33:15.431 portid: 1 00:33:15.431 trsvcid: 4420 00:33:15.431 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:33:15.431 traddr: 10.0.0.1 00:33:15.431 eflags: none 00:33:15.431 sectype: none 00:33:15.431 =====Discovery Log Entry 1====== 00:33:15.431 trtype: tcp 00:33:15.431 adrfam: ipv4 00:33:15.431 subtype: nvme subsystem 00:33:15.431 treq: not specified, sq flow control disable supported 00:33:15.431 portid: 1 00:33:15.431 trsvcid: 4420 00:33:15.431 subnqn: nqn.2016-06.io.spdk:testnqn 00:33:15.431 traddr: 10.0.0.1 00:33:15.431 eflags: none 00:33:15.431 sectype: none 00:33:15.431 10:14:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:33:15.431 10:14:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:33:15.431 10:14:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:33:15.431 10:14:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:33:15.431 10:14:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:33:15.431 10:14:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:33:15.431 10:14:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:33:15.431 10:14:28 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:33:15.431 10:14:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:33:15.431 10:14:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:33:15.431 10:14:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:33:15.431 10:14:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:33:15.431 10:14:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:33:15.431 10:14:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:33:15.431 10:14:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:33:15.431 10:14:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:33:15.431 10:14:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:33:15.431 10:14:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:33:15.431 10:14:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:33:15.431 10:14:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:33:15.431 10:14:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:33:18.717 Initializing NVMe Controllers 00:33:18.717 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:33:18.717 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:33:18.717 Initialization complete. Launching workers. 00:33:18.717 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 41791, failed: 0 00:33:18.717 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 41791, failed to submit 0 00:33:18.717 success 0, unsuccess 41791, failed 0 00:33:18.717 10:14:32 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:33:18.717 10:14:32 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:33:22.002 Initializing NVMe Controllers 00:33:22.002 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:33:22.002 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:33:22.002 Initialization complete. Launching workers. 
00:33:22.002 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 85305, failed: 0 00:33:22.002 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 38750, failed to submit 46555 00:33:22.002 success 0, unsuccess 38750, failed 0 00:33:22.002 10:14:35 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:33:22.002 10:14:35 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:33:25.289 Initializing NVMe Controllers 00:33:25.289 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:33:25.289 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:33:25.289 Initialization complete. Launching workers. 00:33:25.289 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 104293, failed: 0 00:33:25.289 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 26086, failed to submit 78207 00:33:25.289 success 0, unsuccess 26086, failed 0 00:33:25.289 10:14:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:33:25.289 10:14:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:33:25.289 10:14:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # echo 0 00:33:25.289 10:14:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:33:25.289 10:14:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:33:25.289 10:14:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:33:25.289 10:14:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:33:25.289 10:14:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:33:25.289 10:14:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:33:25.289 10:14:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:33:25.860 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:33:32.425 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:33:32.425 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:33:32.425 00:33:32.425 real 0m18.052s 00:33:32.425 user 0m7.019s 00:33:32.425 sys 0m8.792s 00:33:32.425 ************************************ 00:33:32.425 END TEST kernel_target_abort 00:33:32.425 ************************************ 00:33:32.425 10:14:45 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:33:32.425 10:14:45 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:33:32.425 10:14:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@1142 -- # return 0 00:33:32.425 10:14:45 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:33:32.425 
10:14:45 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:33:32.425 10:14:45 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # nvmfcleanup 00:33:32.425 10:14:45 nvmf_abort_qd_sizes -- nvmf/common.sh@117 -- # sync 00:33:32.425 10:14:45 nvmf_abort_qd_sizes -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:33:32.425 10:14:45 nvmf_abort_qd_sizes -- nvmf/common.sh@120 -- # set +e 00:33:32.425 10:14:45 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # for i in {1..20} 00:33:32.425 10:14:45 nvmf_abort_qd_sizes -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:33:32.425 rmmod nvme_tcp 00:33:32.425 rmmod nvme_fabrics 00:33:32.684 rmmod nvme_keyring 00:33:32.684 10:14:46 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:33:32.684 10:14:46 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set -e 00:33:32.684 10:14:46 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # return 0 00:33:32.684 10:14:46 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # '[' -n 99126 ']' 00:33:32.684 10:14:46 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # killprocess 99126 00:33:32.684 10:14:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@948 -- # '[' -z 99126 ']' 00:33:32.684 10:14:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@952 -- # kill -0 99126 00:33:32.684 Process with pid 99126 is not found 00:33:32.684 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (99126) - No such process 00:33:32.684 10:14:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@975 -- # echo 'Process with pid 99126 is not found' 00:33:32.684 10:14:46 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:33:32.684 10:14:46 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:33:32.976 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:33:32.976 Waiting for block devices as requested 00:33:33.241 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:33:33.241 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:33:33.241 10:14:46 nvmf_abort_qd_sizes -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:33:33.241 10:14:46 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:33:33.241 10:14:46 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:33:33.241 10:14:46 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # remove_spdk_ns 00:33:33.241 10:14:46 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:33.241 10:14:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:33:33.241 10:14:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:33.241 10:14:46 nvmf_abort_qd_sizes -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:33:33.241 00:33:33.241 real 0m34.089s 00:33:33.241 user 0m57.153s 00:33:33.241 sys 0m11.913s 00:33:33.241 10:14:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@1124 -- # xtrace_disable 00:33:33.241 10:14:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:33:33.241 ************************************ 00:33:33.241 END TEST nvmf_abort_qd_sizes 00:33:33.241 ************************************ 00:33:33.500 10:14:46 -- common/autotest_common.sh@1142 -- # return 0 00:33:33.500 10:14:46 -- spdk/autotest.sh@295 -- # run_test keyring_file /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:33:33.500 10:14:46 -- common/autotest_common.sh@1099 -- # '[' 2 
-le 1 ']' 00:33:33.500 10:14:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:33.500 10:14:46 -- common/autotest_common.sh@10 -- # set +x 00:33:33.500 ************************************ 00:33:33.500 START TEST keyring_file 00:33:33.500 ************************************ 00:33:33.500 10:14:46 keyring_file -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:33:33.500 * Looking for test storage... 00:33:33.500 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:33:33.500 10:14:46 keyring_file -- keyring/file.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:33:33.500 10:14:46 keyring_file -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:33:33.500 10:14:46 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:33:33.500 10:14:46 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:33.500 10:14:46 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:33.500 10:14:46 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:33.500 10:14:46 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:33.500 10:14:46 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:33.500 10:14:46 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:33.500 10:14:46 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:33.500 10:14:46 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:33.500 10:14:46 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:33.500 10:14:46 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:33.500 10:14:47 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec 00:33:33.500 10:14:47 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=a2b6b25a-cc90-4aea-9f09-c06f8a634aec 00:33:33.500 10:14:47 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:33.500 10:14:47 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:33.500 10:14:47 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:33:33.500 10:14:47 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:33.500 10:14:47 keyring_file -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:33:33.500 10:14:47 keyring_file -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:33.500 10:14:47 keyring_file -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:33.500 10:14:47 keyring_file -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:33.500 10:14:47 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:33.500 10:14:47 keyring_file -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:33.500 10:14:47 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:33.500 10:14:47 keyring_file -- paths/export.sh@5 -- # export PATH 00:33:33.500 10:14:47 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:33.500 10:14:47 keyring_file -- nvmf/common.sh@47 -- # : 0 00:33:33.500 10:14:47 keyring_file -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:33:33.500 10:14:47 keyring_file -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:33:33.500 10:14:47 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:33.500 10:14:47 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:33.500 10:14:47 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:33.500 10:14:47 keyring_file -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:33:33.500 10:14:47 keyring_file -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:33:33.500 10:14:47 keyring_file -- nvmf/common.sh@51 -- # have_pci_nics=0 00:33:33.500 10:14:47 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:33:33.500 10:14:47 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:33:33.500 10:14:47 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:33:33.500 10:14:47 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:33:33.500 10:14:47 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:33:33.500 10:14:47 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:33:33.500 10:14:47 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:33:33.500 10:14:47 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:33:33.500 10:14:47 keyring_file -- keyring/common.sh@17 -- # name=key0 00:33:33.500 10:14:47 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:33:33.500 10:14:47 keyring_file -- keyring/common.sh@17 -- # digest=0 00:33:33.500 10:14:47 keyring_file -- keyring/common.sh@18 -- # mktemp 00:33:33.500 10:14:47 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.5azciskq19 00:33:33.500 10:14:47 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:33:33.500 10:14:47 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 
00112233445566778899aabbccddeeff 0 00:33:33.500 10:14:47 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:33:33.500 10:14:47 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:33:33.500 10:14:47 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:33:33.500 10:14:47 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:33:33.500 10:14:47 keyring_file -- nvmf/common.sh@705 -- # python - 00:33:33.760 10:14:47 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.5azciskq19 00:33:33.760 10:14:47 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.5azciskq19 00:33:33.760 10:14:47 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.5azciskq19 00:33:33.760 10:14:47 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:33:33.760 10:14:47 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:33:33.760 10:14:47 keyring_file -- keyring/common.sh@17 -- # name=key1 00:33:33.760 10:14:47 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:33:33.760 10:14:47 keyring_file -- keyring/common.sh@17 -- # digest=0 00:33:33.760 10:14:47 keyring_file -- keyring/common.sh@18 -- # mktemp 00:33:33.760 10:14:47 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.yN8FeKvBUi 00:33:33.760 10:14:47 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:33:33.760 10:14:47 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:33:33.760 10:14:47 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:33:33.760 10:14:47 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:33:33.760 10:14:47 keyring_file -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:33:33.760 10:14:47 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:33:33.760 10:14:47 keyring_file -- nvmf/common.sh@705 -- # python - 00:33:33.760 10:14:47 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.yN8FeKvBUi 00:33:33.760 10:14:47 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.yN8FeKvBUi 00:33:33.760 10:14:47 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.yN8FeKvBUi 00:33:33.760 10:14:47 keyring_file -- keyring/file.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:33:33.760 10:14:47 keyring_file -- keyring/file.sh@30 -- # tgtpid=100104 00:33:33.760 10:14:47 keyring_file -- keyring/file.sh@32 -- # waitforlisten 100104 00:33:33.760 10:14:47 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 100104 ']' 00:33:33.760 10:14:47 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:33.760 10:14:47 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:33:33.760 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:33.760 10:14:47 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:33.760 10:14:47 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:33:33.760 10:14:47 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:33:33.760 [2024-07-15 10:14:47.198030] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
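For reference, the prep_key trace above (keyring/common.sh@15-23) boils down to a short shell sequence: allocate a temp file, convert the hex secret into the TLS PSK interchange form with the format_interchange_psk helper from test/nvmf/common.sh, and restrict the file to mode 0600 so the keyring will accept it. A minimal sketch, assuming that helper is sourced and prints the formatted key to stdout:

    # sketch of prep_key; not a verbatim copy of keyring/common.sh
    source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh   # provides format_interchange_psk
    key_hex=00112233445566778899aabbccddeeff                  # same test secret used for key0 above
    key_path=$(mktemp)                                        # e.g. /tmp/tmp.5azciskq19
    format_interchange_psk "$key_hex" 0 > "$key_path"         # digest 0 selects the no-hash interchange variant
    chmod 0600 "$key_path"                                    # owner-only permissions; checked again later in this test
    echo "$key_path"

The 0600 mode matters: a later step in this same test flips the file to 0660 and expects keyring_file_add_key to reject it.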
00:33:33.760 [2024-07-15 10:14:47.198097] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100104 ] 00:33:33.760 [2024-07-15 10:14:47.335137] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:34.019 [2024-07-15 10:14:47.441123] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:34.589 10:14:48 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:33:34.589 10:14:48 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:33:34.589 10:14:48 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:33:34.589 10:14:48 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:34.589 10:14:48 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:33:34.589 [2024-07-15 10:14:48.069772] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:34.589 null0 00:33:34.589 [2024-07-15 10:14:48.101669] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:33:34.589 [2024-07-15 10:14:48.101864] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:33:34.589 [2024-07-15 10:14:48.109653] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:33:34.589 10:14:48 keyring_file -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:34.589 10:14:48 keyring_file -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:33:34.589 10:14:48 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:33:34.589 10:14:48 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:33:34.589 10:14:48 keyring_file -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:33:34.589 10:14:48 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:34.589 10:14:48 keyring_file -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:33:34.589 10:14:48 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:34.589 10:14:48 keyring_file -- common/autotest_common.sh@651 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:33:34.589 10:14:48 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:34.589 10:14:48 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:33:34.589 [2024-07-15 10:14:48.125629] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:33:34.589 2024/07/15 10:14:48 error on JSON-RPC call, method: nvmf_subsystem_add_listener, params: map[listen_address:map[traddr:127.0.0.1 trsvcid:4420 trtype:tcp] nqn:nqn.2016-06.io.spdk:cnode0 secure_channel:%!s(bool=false)], err: error received for nvmf_subsystem_add_listener method, err: Code=-32602 Msg=Invalid parameters 00:33:34.589 request: 00:33:34.589 { 00:33:34.589 "method": "nvmf_subsystem_add_listener", 00:33:34.589 "params": { 00:33:34.589 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:33:34.589 "secure_channel": false, 00:33:34.589 "listen_address": { 00:33:34.589 "trtype": "tcp", 00:33:34.589 "traddr": "127.0.0.1", 00:33:34.589 "trsvcid": "4420" 00:33:34.589 } 00:33:34.589 } 00:33:34.589 } 00:33:34.589 Got JSON-RPC error 
response 00:33:34.589 GoRPCClient: error on JSON-RPC call 00:33:34.589 10:14:48 keyring_file -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:33:34.589 10:14:48 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:33:34.589 10:14:48 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:33:34.589 10:14:48 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:33:34.589 10:14:48 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:33:34.589 10:14:48 keyring_file -- keyring/file.sh@46 -- # bperfpid=100138 00:33:34.589 10:14:48 keyring_file -- keyring/file.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:33:34.589 10:14:48 keyring_file -- keyring/file.sh@48 -- # waitforlisten 100138 /var/tmp/bperf.sock 00:33:34.589 10:14:48 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 100138 ']' 00:33:34.589 10:14:48 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:34.589 10:14:48 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:33:34.589 10:14:48 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:34.589 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:34.589 10:14:48 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:33:34.589 10:14:48 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:33:34.848 [2024-07-15 10:14:48.182276] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:33:34.848 [2024-07-15 10:14:48.182354] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100138 ] 00:33:34.848 [2024-07-15 10:14:48.317915] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:34.848 [2024-07-15 10:14:48.422550] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:33:35.783 10:14:49 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:33:35.783 10:14:49 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:33:35.783 10:14:49 keyring_file -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.5azciskq19 00:33:35.783 10:14:49 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.5azciskq19 00:33:35.783 10:14:49 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.yN8FeKvBUi 00:33:35.783 10:14:49 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.yN8FeKvBUi 00:33:36.040 10:14:49 keyring_file -- keyring/file.sh@51 -- # get_key key0 00:33:36.040 10:14:49 keyring_file -- keyring/file.sh@51 -- # jq -r .path 00:33:36.040 10:14:49 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:36.040 10:14:49 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:36.040 10:14:49 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:36.298 10:14:49 keyring_file -- keyring/file.sh@51 -- # [[ /tmp/tmp.5azciskq19 == 
\/\t\m\p\/\t\m\p\.\5\a\z\c\i\s\k\q\1\9 ]] 00:33:36.298 10:14:49 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:33:36.298 10:14:49 keyring_file -- keyring/file.sh@52 -- # get_key key1 00:33:36.298 10:14:49 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:36.298 10:14:49 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:36.298 10:14:49 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:33:36.556 10:14:49 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.yN8FeKvBUi == \/\t\m\p\/\t\m\p\.\y\N\8\F\e\K\v\B\U\i ]] 00:33:36.556 10:14:49 keyring_file -- keyring/file.sh@53 -- # get_refcnt key0 00:33:36.556 10:14:49 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:33:36.556 10:14:49 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:36.556 10:14:49 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:36.556 10:14:49 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:36.556 10:14:49 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:36.556 10:14:50 keyring_file -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:33:36.556 10:14:50 keyring_file -- keyring/file.sh@54 -- # get_refcnt key1 00:33:36.556 10:14:50 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:33:36.556 10:14:50 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:36.556 10:14:50 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:36.556 10:14:50 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:36.556 10:14:50 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:33:36.814 10:14:50 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:33:36.814 10:14:50 keyring_file -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:36.814 10:14:50 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:37.072 [2024-07-15 10:14:50.484040] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:33:37.072 nvme0n1 00:33:37.072 10:14:50 keyring_file -- keyring/file.sh@59 -- # get_refcnt key0 00:33:37.072 10:14:50 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:33:37.072 10:14:50 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:37.072 10:14:50 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:37.072 10:14:50 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:37.072 10:14:50 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:37.329 10:14:50 keyring_file -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:33:37.329 10:14:50 keyring_file -- keyring/file.sh@60 -- # get_refcnt key1 00:33:37.329 10:14:50 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:37.329 10:14:50 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:33:37.329 10:14:50 keyring_file -- keyring/common.sh@10 -- # bperf_cmd 
keyring_get_keys 00:33:37.329 10:14:50 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:33:37.329 10:14:50 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:37.587 10:14:50 keyring_file -- keyring/file.sh@60 -- # (( 1 == 1 )) 00:33:37.587 10:14:50 keyring_file -- keyring/file.sh@62 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:37.587 Running I/O for 1 seconds... 00:33:38.522 00:33:38.523 Latency(us) 00:33:38.523 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:38.523 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:33:38.523 nvme0n1 : 1.00 18159.32 70.93 0.00 0.00 7034.15 3448.51 16140.74 00:33:38.523 =================================================================================================================== 00:33:38.523 Total : 18159.32 70.93 0.00 0.00 7034.15 3448.51 16140.74 00:33:38.523 0 00:33:38.523 10:14:52 keyring_file -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:33:38.523 10:14:52 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:33:38.788 10:14:52 keyring_file -- keyring/file.sh@65 -- # get_refcnt key0 00:33:38.788 10:14:52 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:38.788 10:14:52 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:33:38.788 10:14:52 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:38.788 10:14:52 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:38.788 10:14:52 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:39.048 10:14:52 keyring_file -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:33:39.048 10:14:52 keyring_file -- keyring/file.sh@66 -- # get_refcnt key1 00:33:39.048 10:14:52 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:33:39.048 10:14:52 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:39.048 10:14:52 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:39.048 10:14:52 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:39.048 10:14:52 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:33:39.306 10:14:52 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:33:39.306 10:14:52 keyring_file -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:33:39.306 10:14:52 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:33:39.306 10:14:52 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:33:39.306 10:14:52 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:33:39.306 10:14:52 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:39.306 10:14:52 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:33:39.306 10:14:52 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 
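The positive path traced above reduces to three calls against the bdevperf socket: attach a controller with --psk key0 (key0's refcnt goes from 1 to 2), drive I/O with perform_tests, then detach so the refcnt falls back to 1. A condensed sketch that strings the traced commands together and combines the two jq filters get_refcnt uses:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/bperf.sock
    $rpc -s "$sock" bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s "$sock" perform_tests
    $rpc -s "$sock" keyring_get_keys | jq -r '.[] | select(.name == "key0") | .refcnt'   # 2 while attached
    $rpc -s "$sock" bdev_nvme_detach_controller nvme0

The entries that follow repeat the attach with --psk key1, which is expected to fail because it does not match the PSK the target subsystem was configured with.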
00:33:39.306 10:14:52 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:33:39.306 10:14:52 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:33:39.565 [2024-07-15 10:14:52.995066] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:33:39.565 [2024-07-15 10:14:52.995268] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f18f30 (107): Transport endpoint is not connected 00:33:39.565 [2024-07-15 10:14:52.996254] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f18f30 (9): Bad file descriptor 00:33:39.565 [2024-07-15 10:14:52.997251] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:33:39.565 [2024-07-15 10:14:52.997272] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:33:39.565 [2024-07-15 10:14:52.997279] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:33:39.565 2024/07/15 10:14:52 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host0 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:key1 subnqn:nqn.2016-06.io.spdk:cnode0 traddr:127.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:33:39.565 request: 00:33:39.565 { 00:33:39.565 "method": "bdev_nvme_attach_controller", 00:33:39.565 "params": { 00:33:39.565 "name": "nvme0", 00:33:39.565 "trtype": "tcp", 00:33:39.565 "traddr": "127.0.0.1", 00:33:39.565 "adrfam": "ipv4", 00:33:39.565 "trsvcid": "4420", 00:33:39.565 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:39.565 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:39.565 "prchk_reftag": false, 00:33:39.565 "prchk_guard": false, 00:33:39.565 "hdgst": false, 00:33:39.565 "ddgst": false, 00:33:39.565 "psk": "key1" 00:33:39.565 } 00:33:39.565 } 00:33:39.565 Got JSON-RPC error response 00:33:39.565 GoRPCClient: error on JSON-RPC call 00:33:39.565 10:14:53 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:33:39.565 10:14:53 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:33:39.565 10:14:53 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:33:39.565 10:14:53 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:33:39.565 10:14:53 keyring_file -- keyring/file.sh@71 -- # get_refcnt key0 00:33:39.565 10:14:53 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:39.565 10:14:53 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:33:39.565 10:14:53 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:39.565 10:14:53 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:39.565 10:14:53 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:39.823 10:14:53 keyring_file -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:33:39.823 
10:14:53 keyring_file -- keyring/file.sh@72 -- # get_refcnt key1 00:33:39.823 10:14:53 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:33:39.823 10:14:53 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:39.823 10:14:53 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:39.823 10:14:53 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:39.823 10:14:53 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:33:40.099 10:14:53 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:33:40.099 10:14:53 keyring_file -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:33:40.099 10:14:53 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:33:40.099 10:14:53 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:33:40.099 10:14:53 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:33:40.357 10:14:53 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:33:40.357 10:14:53 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:40.357 10:14:53 keyring_file -- keyring/file.sh@77 -- # jq length 00:33:40.614 10:14:54 keyring_file -- keyring/file.sh@77 -- # (( 0 == 0 )) 00:33:40.614 10:14:54 keyring_file -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.5azciskq19 00:33:40.614 10:14:54 keyring_file -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.5azciskq19 00:33:40.614 10:14:54 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:33:40.614 10:14:54 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.5azciskq19 00:33:40.614 10:14:54 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:33:40.614 10:14:54 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:40.614 10:14:54 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:33:40.614 10:14:54 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:40.614 10:14:54 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.5azciskq19 00:33:40.614 10:14:54 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.5azciskq19 00:33:40.873 [2024-07-15 10:14:54.225304] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.5azciskq19': 0100660 00:33:40.873 [2024-07-15 10:14:54.225345] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:33:40.873 2024/07/15 10:14:54 error on JSON-RPC call, method: keyring_file_add_key, params: map[name:key0 path:/tmp/tmp.5azciskq19], err: error received for keyring_file_add_key method, err: Code=-1 Msg=Operation not permitted 00:33:40.873 request: 00:33:40.873 { 00:33:40.873 "method": "keyring_file_add_key", 00:33:40.873 "params": { 00:33:40.873 "name": "key0", 00:33:40.873 "path": "/tmp/tmp.5azciskq19" 00:33:40.873 } 00:33:40.873 } 00:33:40.873 Got JSON-RPC error response 00:33:40.874 GoRPCClient: error on JSON-RPC call 00:33:40.874 10:14:54 keyring_file -- common/autotest_common.sh@651 -- # 
es=1 00:33:40.874 10:14:54 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:33:40.874 10:14:54 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:33:40.874 10:14:54 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:33:40.874 10:14:54 keyring_file -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.5azciskq19 00:33:40.874 10:14:54 keyring_file -- keyring/file.sh@85 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.5azciskq19 00:33:40.874 10:14:54 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.5azciskq19 00:33:40.874 10:14:54 keyring_file -- keyring/file.sh@86 -- # rm -f /tmp/tmp.5azciskq19 00:33:40.874 10:14:54 keyring_file -- keyring/file.sh@88 -- # get_refcnt key0 00:33:40.874 10:14:54 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:33:40.874 10:14:54 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:40.874 10:14:54 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:40.874 10:14:54 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:40.874 10:14:54 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:41.133 10:14:54 keyring_file -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:33:41.133 10:14:54 keyring_file -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:41.133 10:14:54 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:33:41.133 10:14:54 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:41.133 10:14:54 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:33:41.133 10:14:54 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:41.133 10:14:54 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:33:41.133 10:14:54 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:41.133 10:14:54 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:41.133 10:14:54 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:41.393 [2024-07-15 10:14:54.820314] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.5azciskq19': No such file or directory 00:33:41.393 [2024-07-15 10:14:54.820353] nvme_tcp.c:2582:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:33:41.393 [2024-07-15 10:14:54.820374] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:33:41.393 [2024-07-15 10:14:54.820380] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:33:41.393 [2024-07-15 10:14:54.820390] bdev_nvme.c:6268:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:33:41.393 2024/07/15 
10:14:54 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host0 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:key0 subnqn:nqn.2016-06.io.spdk:cnode0 traddr:127.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-19 Msg=No such device 00:33:41.393 request: 00:33:41.393 { 00:33:41.393 "method": "bdev_nvme_attach_controller", 00:33:41.393 "params": { 00:33:41.393 "name": "nvme0", 00:33:41.393 "trtype": "tcp", 00:33:41.393 "traddr": "127.0.0.1", 00:33:41.393 "adrfam": "ipv4", 00:33:41.393 "trsvcid": "4420", 00:33:41.393 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:41.393 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:41.393 "prchk_reftag": false, 00:33:41.393 "prchk_guard": false, 00:33:41.393 "hdgst": false, 00:33:41.393 "ddgst": false, 00:33:41.393 "psk": "key0" 00:33:41.393 } 00:33:41.393 } 00:33:41.393 Got JSON-RPC error response 00:33:41.393 GoRPCClient: error on JSON-RPC call 00:33:41.393 10:14:54 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:33:41.393 10:14:54 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:33:41.393 10:14:54 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:33:41.393 10:14:54 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:33:41.393 10:14:54 keyring_file -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:33:41.393 10:14:54 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:33:41.651 10:14:55 keyring_file -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:33:41.651 10:14:55 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:33:41.651 10:14:55 keyring_file -- keyring/common.sh@17 -- # name=key0 00:33:41.651 10:14:55 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:33:41.651 10:14:55 keyring_file -- keyring/common.sh@17 -- # digest=0 00:33:41.651 10:14:55 keyring_file -- keyring/common.sh@18 -- # mktemp 00:33:41.651 10:14:55 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.eikiKkNSdz 00:33:41.651 10:14:55 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:33:41.651 10:14:55 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:33:41.651 10:14:55 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:33:41.651 10:14:55 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:33:41.651 10:14:55 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:33:41.651 10:14:55 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:33:41.651 10:14:55 keyring_file -- nvmf/common.sh@705 -- # python - 00:33:41.651 10:14:55 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.eikiKkNSdz 00:33:41.651 10:14:55 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.eikiKkNSdz 00:33:41.651 10:14:55 keyring_file -- keyring/file.sh@95 -- # key0path=/tmp/tmp.eikiKkNSdz 00:33:41.651 10:14:55 keyring_file -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.eikiKkNSdz 00:33:41.651 10:14:55 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.eikiKkNSdz 00:33:41.910 10:14:55 keyring_file -- 
keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:41.910 10:14:55 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:42.169 nvme0n1 00:33:42.169 10:14:55 keyring_file -- keyring/file.sh@99 -- # get_refcnt key0 00:33:42.169 10:14:55 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:33:42.169 10:14:55 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:42.169 10:14:55 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:42.169 10:14:55 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:42.169 10:14:55 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:42.427 10:14:55 keyring_file -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:33:42.427 10:14:55 keyring_file -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:33:42.427 10:14:55 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:33:42.427 10:14:55 keyring_file -- keyring/file.sh@101 -- # get_key key0 00:33:42.427 10:14:56 keyring_file -- keyring/file.sh@101 -- # jq -r .removed 00:33:42.427 10:14:56 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:42.427 10:14:56 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:42.427 10:14:56 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:42.685 10:14:56 keyring_file -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:33:42.685 10:14:56 keyring_file -- keyring/file.sh@102 -- # get_refcnt key0 00:33:42.685 10:14:56 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:33:42.685 10:14:56 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:42.685 10:14:56 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:42.685 10:14:56 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:42.685 10:14:56 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:42.943 10:14:56 keyring_file -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:33:42.943 10:14:56 keyring_file -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:33:42.943 10:14:56 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:33:43.216 10:14:56 keyring_file -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 00:33:43.216 10:14:56 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:43.216 10:14:56 keyring_file -- keyring/file.sh@104 -- # jq length 00:33:43.475 10:14:56 keyring_file -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:33:43.475 10:14:56 keyring_file -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.eikiKkNSdz 00:33:43.475 10:14:56 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.eikiKkNSdz 
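The removal checks just above (keyring/file.sh@100-104) show what happens to a key that is still referenced by a controller: keyring_file_remove_key succeeds, the refcnt reported by keyring_get_keys drops from 2 to 1 and the entry is flagged removed == true, but the key only disappears once bdev_nvme_detach_controller releases the last reference, after which keyring_get_keys returns an empty list. Condensed, with the socket and key name taken from the trace:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc -s /var/tmp/bperf.sock keyring_file_remove_key key0
    $rpc -s /var/tmp/bperf.sock keyring_get_keys | jq '.[] | select(.name == "key0") | {refcnt, removed}'
    $rpc -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0
    $rpc -s /var/tmp/bperf.sock keyring_get_keys | jq length   # -> 0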
00:33:43.475 10:14:57 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.yN8FeKvBUi 00:33:43.475 10:14:57 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.yN8FeKvBUi 00:33:43.732 10:14:57 keyring_file -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:43.732 10:14:57 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:43.990 nvme0n1 00:33:43.990 10:14:57 keyring_file -- keyring/file.sh@112 -- # bperf_cmd save_config 00:33:43.990 10:14:57 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:33:44.248 10:14:57 keyring_file -- keyring/file.sh@112 -- # config='{ 00:33:44.248 "subsystems": [ 00:33:44.248 { 00:33:44.248 "subsystem": "keyring", 00:33:44.248 "config": [ 00:33:44.248 { 00:33:44.248 "method": "keyring_file_add_key", 00:33:44.248 "params": { 00:33:44.248 "name": "key0", 00:33:44.248 "path": "/tmp/tmp.eikiKkNSdz" 00:33:44.248 } 00:33:44.248 }, 00:33:44.248 { 00:33:44.248 "method": "keyring_file_add_key", 00:33:44.248 "params": { 00:33:44.248 "name": "key1", 00:33:44.248 "path": "/tmp/tmp.yN8FeKvBUi" 00:33:44.248 } 00:33:44.248 } 00:33:44.248 ] 00:33:44.248 }, 00:33:44.248 { 00:33:44.248 "subsystem": "iobuf", 00:33:44.248 "config": [ 00:33:44.248 { 00:33:44.248 "method": "iobuf_set_options", 00:33:44.248 "params": { 00:33:44.248 "large_bufsize": 135168, 00:33:44.248 "large_pool_count": 1024, 00:33:44.248 "small_bufsize": 8192, 00:33:44.248 "small_pool_count": 8192 00:33:44.248 } 00:33:44.248 } 00:33:44.248 ] 00:33:44.248 }, 00:33:44.248 { 00:33:44.248 "subsystem": "sock", 00:33:44.248 "config": [ 00:33:44.248 { 00:33:44.248 "method": "sock_set_default_impl", 00:33:44.248 "params": { 00:33:44.248 "impl_name": "posix" 00:33:44.248 } 00:33:44.248 }, 00:33:44.248 { 00:33:44.248 "method": "sock_impl_set_options", 00:33:44.248 "params": { 00:33:44.248 "enable_ktls": false, 00:33:44.248 "enable_placement_id": 0, 00:33:44.248 "enable_quickack": false, 00:33:44.248 "enable_recv_pipe": true, 00:33:44.248 "enable_zerocopy_send_client": false, 00:33:44.248 "enable_zerocopy_send_server": true, 00:33:44.248 "impl_name": "ssl", 00:33:44.248 "recv_buf_size": 4096, 00:33:44.248 "send_buf_size": 4096, 00:33:44.248 "tls_version": 0, 00:33:44.248 "zerocopy_threshold": 0 00:33:44.248 } 00:33:44.248 }, 00:33:44.248 { 00:33:44.248 "method": "sock_impl_set_options", 00:33:44.248 "params": { 00:33:44.248 "enable_ktls": false, 00:33:44.248 "enable_placement_id": 0, 00:33:44.248 "enable_quickack": false, 00:33:44.248 "enable_recv_pipe": true, 00:33:44.248 "enable_zerocopy_send_client": false, 00:33:44.248 "enable_zerocopy_send_server": true, 00:33:44.248 "impl_name": "posix", 00:33:44.248 "recv_buf_size": 2097152, 00:33:44.248 "send_buf_size": 2097152, 00:33:44.248 "tls_version": 0, 00:33:44.248 "zerocopy_threshold": 0 00:33:44.248 } 00:33:44.248 } 00:33:44.248 ] 00:33:44.248 }, 00:33:44.248 { 00:33:44.248 "subsystem": "vmd", 00:33:44.248 "config": [] 00:33:44.248 }, 00:33:44.248 { 00:33:44.248 "subsystem": "accel", 00:33:44.248 "config": [ 00:33:44.248 { 00:33:44.248 "method": 
"accel_set_options", 00:33:44.248 "params": { 00:33:44.248 "buf_count": 2048, 00:33:44.248 "large_cache_size": 16, 00:33:44.248 "sequence_count": 2048, 00:33:44.248 "small_cache_size": 128, 00:33:44.248 "task_count": 2048 00:33:44.248 } 00:33:44.248 } 00:33:44.248 ] 00:33:44.248 }, 00:33:44.248 { 00:33:44.248 "subsystem": "bdev", 00:33:44.248 "config": [ 00:33:44.248 { 00:33:44.248 "method": "bdev_set_options", 00:33:44.248 "params": { 00:33:44.248 "bdev_auto_examine": true, 00:33:44.248 "bdev_io_cache_size": 256, 00:33:44.248 "bdev_io_pool_size": 65535, 00:33:44.248 "iobuf_large_cache_size": 16, 00:33:44.248 "iobuf_small_cache_size": 128 00:33:44.248 } 00:33:44.248 }, 00:33:44.248 { 00:33:44.248 "method": "bdev_raid_set_options", 00:33:44.248 "params": { 00:33:44.248 "process_window_size_kb": 1024 00:33:44.248 } 00:33:44.248 }, 00:33:44.248 { 00:33:44.248 "method": "bdev_iscsi_set_options", 00:33:44.248 "params": { 00:33:44.248 "timeout_sec": 30 00:33:44.248 } 00:33:44.248 }, 00:33:44.248 { 00:33:44.248 "method": "bdev_nvme_set_options", 00:33:44.248 "params": { 00:33:44.248 "action_on_timeout": "none", 00:33:44.248 "allow_accel_sequence": false, 00:33:44.248 "arbitration_burst": 0, 00:33:44.248 "bdev_retry_count": 3, 00:33:44.248 "ctrlr_loss_timeout_sec": 0, 00:33:44.248 "delay_cmd_submit": true, 00:33:44.248 "dhchap_dhgroups": [ 00:33:44.248 "null", 00:33:44.248 "ffdhe2048", 00:33:44.248 "ffdhe3072", 00:33:44.248 "ffdhe4096", 00:33:44.248 "ffdhe6144", 00:33:44.248 "ffdhe8192" 00:33:44.248 ], 00:33:44.248 "dhchap_digests": [ 00:33:44.248 "sha256", 00:33:44.248 "sha384", 00:33:44.248 "sha512" 00:33:44.248 ], 00:33:44.248 "disable_auto_failback": false, 00:33:44.248 "fast_io_fail_timeout_sec": 0, 00:33:44.248 "generate_uuids": false, 00:33:44.248 "high_priority_weight": 0, 00:33:44.248 "io_path_stat": false, 00:33:44.248 "io_queue_requests": 512, 00:33:44.248 "keep_alive_timeout_ms": 10000, 00:33:44.248 "low_priority_weight": 0, 00:33:44.248 "medium_priority_weight": 0, 00:33:44.248 "nvme_adminq_poll_period_us": 10000, 00:33:44.248 "nvme_error_stat": false, 00:33:44.248 "nvme_ioq_poll_period_us": 0, 00:33:44.248 "rdma_cm_event_timeout_ms": 0, 00:33:44.248 "rdma_max_cq_size": 0, 00:33:44.248 "rdma_srq_size": 0, 00:33:44.248 "reconnect_delay_sec": 0, 00:33:44.248 "timeout_admin_us": 0, 00:33:44.248 "timeout_us": 0, 00:33:44.248 "transport_ack_timeout": 0, 00:33:44.248 "transport_retry_count": 4, 00:33:44.248 "transport_tos": 0 00:33:44.248 } 00:33:44.249 }, 00:33:44.249 { 00:33:44.249 "method": "bdev_nvme_attach_controller", 00:33:44.249 "params": { 00:33:44.249 "adrfam": "IPv4", 00:33:44.249 "ctrlr_loss_timeout_sec": 0, 00:33:44.249 "ddgst": false, 00:33:44.249 "fast_io_fail_timeout_sec": 0, 00:33:44.249 "hdgst": false, 00:33:44.249 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:44.249 "name": "nvme0", 00:33:44.249 "prchk_guard": false, 00:33:44.249 "prchk_reftag": false, 00:33:44.249 "psk": "key0", 00:33:44.249 "reconnect_delay_sec": 0, 00:33:44.249 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:44.249 "traddr": "127.0.0.1", 00:33:44.249 "trsvcid": "4420", 00:33:44.249 "trtype": "TCP" 00:33:44.249 } 00:33:44.249 }, 00:33:44.249 { 00:33:44.249 "method": "bdev_nvme_set_hotplug", 00:33:44.249 "params": { 00:33:44.249 "enable": false, 00:33:44.249 "period_us": 100000 00:33:44.249 } 00:33:44.249 }, 00:33:44.249 { 00:33:44.249 "method": "bdev_wait_for_examine" 00:33:44.249 } 00:33:44.249 ] 00:33:44.249 }, 00:33:44.249 { 00:33:44.249 "subsystem": "nbd", 00:33:44.249 "config": [] 00:33:44.249 } 
00:33:44.249 ] 00:33:44.249 }' 00:33:44.249 10:14:57 keyring_file -- keyring/file.sh@114 -- # killprocess 100138 00:33:44.249 10:14:57 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 100138 ']' 00:33:44.249 10:14:57 keyring_file -- common/autotest_common.sh@952 -- # kill -0 100138 00:33:44.249 10:14:57 keyring_file -- common/autotest_common.sh@953 -- # uname 00:33:44.249 10:14:57 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:33:44.249 10:14:57 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 100138 00:33:44.562 10:14:57 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:33:44.562 10:14:57 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:33:44.562 10:14:57 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 100138' 00:33:44.562 killing process with pid 100138 00:33:44.562 10:14:57 keyring_file -- common/autotest_common.sh@967 -- # kill 100138 00:33:44.562 Received shutdown signal, test time was about 1.000000 seconds 00:33:44.562 00:33:44.563 Latency(us) 00:33:44.563 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:44.563 =================================================================================================================== 00:33:44.563 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:44.563 10:14:57 keyring_file -- common/autotest_common.sh@972 -- # wait 100138 00:33:44.563 10:14:58 keyring_file -- keyring/file.sh@117 -- # bperfpid=100588 00:33:44.563 10:14:58 keyring_file -- keyring/file.sh@119 -- # waitforlisten 100588 /var/tmp/bperf.sock 00:33:44.563 10:14:58 keyring_file -- keyring/file.sh@115 -- # echo '{ 00:33:44.563 "subsystems": [ 00:33:44.563 { 00:33:44.563 "subsystem": "keyring", 00:33:44.563 "config": [ 00:33:44.563 { 00:33:44.563 "method": "keyring_file_add_key", 00:33:44.563 "params": { 00:33:44.563 "name": "key0", 00:33:44.563 "path": "/tmp/tmp.eikiKkNSdz" 00:33:44.563 } 00:33:44.563 }, 00:33:44.563 { 00:33:44.563 "method": "keyring_file_add_key", 00:33:44.563 "params": { 00:33:44.563 "name": "key1", 00:33:44.563 "path": "/tmp/tmp.yN8FeKvBUi" 00:33:44.563 } 00:33:44.563 } 00:33:44.563 ] 00:33:44.563 }, 00:33:44.563 { 00:33:44.563 "subsystem": "iobuf", 00:33:44.563 "config": [ 00:33:44.563 { 00:33:44.563 "method": "iobuf_set_options", 00:33:44.563 "params": { 00:33:44.563 "large_bufsize": 135168, 00:33:44.563 "large_pool_count": 1024, 00:33:44.563 "small_bufsize": 8192, 00:33:44.563 "small_pool_count": 8192 00:33:44.563 } 00:33:44.563 } 00:33:44.563 ] 00:33:44.563 }, 00:33:44.563 { 00:33:44.563 "subsystem": "sock", 00:33:44.563 "config": [ 00:33:44.563 { 00:33:44.563 "method": "sock_set_default_impl", 00:33:44.563 "params": { 00:33:44.563 "impl_name": "posix" 00:33:44.563 } 00:33:44.563 }, 00:33:44.563 { 00:33:44.563 "method": "sock_impl_set_options", 00:33:44.563 "params": { 00:33:44.563 "enable_ktls": false, 00:33:44.563 "enable_placement_id": 0, 00:33:44.563 "enable_quickack": false, 00:33:44.563 "enable_recv_pipe": true, 00:33:44.563 "enable_zerocopy_send_client": false, 00:33:44.563 "enable_zerocopy_send_server": true, 00:33:44.563 "impl_name": "ssl", 00:33:44.563 "recv_buf_size": 4096, 00:33:44.563 "send_buf_size": 4096, 00:33:44.563 "tls_version": 0, 00:33:44.563 "zerocopy_threshold": 0 00:33:44.563 } 00:33:44.563 }, 00:33:44.563 { 00:33:44.563 "method": "sock_impl_set_options", 00:33:44.563 "params": { 00:33:44.563 "enable_ktls": false, 00:33:44.563 "enable_placement_id": 0, 
00:33:44.563 "enable_quickack": false, 00:33:44.563 "enable_recv_pipe": true, 00:33:44.563 "enable_zerocopy_send_client": false, 00:33:44.563 "enable_zerocopy_send_server": true, 00:33:44.563 "impl_name": "posix", 00:33:44.563 "recv_buf_size": 2097152, 00:33:44.563 "send_buf_size": 2097152, 00:33:44.563 "tls_version": 0, 00:33:44.563 "zerocopy_threshold": 0 00:33:44.563 } 00:33:44.563 } 00:33:44.563 ] 00:33:44.563 }, 00:33:44.563 { 00:33:44.563 "subsystem": "vmd", 00:33:44.563 "config": [] 00:33:44.563 }, 00:33:44.563 { 00:33:44.563 "subsystem": "accel", 00:33:44.563 "config": [ 00:33:44.563 { 00:33:44.563 "method": "accel_set_options", 00:33:44.563 "params": { 00:33:44.563 "buf_count": 2048, 00:33:44.563 "large_cache_size": 16, 00:33:44.563 "sequence_count": 2048, 00:33:44.563 "small_cache_size": 128, 00:33:44.563 "task_count": 2048 00:33:44.563 } 00:33:44.563 } 00:33:44.563 ] 00:33:44.563 }, 00:33:44.563 { 00:33:44.563 "subsystem": "bdev", 00:33:44.563 "config": [ 00:33:44.563 { 00:33:44.563 "method": "bdev_set_options", 00:33:44.563 "params": { 00:33:44.563 "bdev_auto_examine": true, 00:33:44.563 "bdev_io_cache_size": 256, 00:33:44.563 "bdev_io_pool_size": 65535, 00:33:44.563 "iobuf_large_cache_size": 16, 00:33:44.563 "iobuf_small_cache_size": 128 00:33:44.563 } 00:33:44.563 }, 00:33:44.563 { 00:33:44.563 "method": "bdev_raid_set_options", 00:33:44.563 "params": { 00:33:44.563 "process_window_size_kb": 1024 00:33:44.563 } 00:33:44.563 }, 00:33:44.563 { 00:33:44.563 "method": "bdev_iscsi_set_options", 00:33:44.563 "params": { 00:33:44.563 "timeout_sec": 30 00:33:44.563 } 00:33:44.563 }, 00:33:44.563 { 00:33:44.563 "method": "bdev_nvme_set_options", 00:33:44.563 "params": { 00:33:44.563 "action_on_timeout": "none", 00:33:44.563 "allow_accel_sequence": false, 00:33:44.563 "arbitration_burst": 0, 00:33:44.563 "bdev_retry_count": 3, 00:33:44.563 "ctrlr_loss_timeout_sec": 0, 00:33:44.563 "delay_cmd_submit": true, 00:33:44.563 "dhchap_dhgroups": [ 00:33:44.563 "null", 00:33:44.563 "ffdhe2048", 00:33:44.563 "ffdhe3072", 00:33:44.563 "ffdhe4096", 00:33:44.563 "ffdhe6144", 00:33:44.563 "ffdhe8192" 00:33:44.563 ], 00:33:44.563 "dhchap_digests": [ 00:33:44.563 "sha256", 00:33:44.563 "sha384", 00:33:44.563 "sha512" 00:33:44.563 ], 00:33:44.563 "disable_auto_failback": false, 00:33:44.563 "fast_io_fail_timeout_sec": 0, 00:33:44.563 "generate_uuids": false, 00:33:44.563 "high_priority_weight": 0, 00:33:44.563 "io_path_stat": false, 00:33:44.563 "io_queue_requests": 512, 00:33:44.563 "keep_alive_timeout_ms": 10000, 00:33:44.563 "low_priority_weight": 0, 00:33:44.563 "medium_priority_weight": 0, 00:33:44.563 "nvme_adminq_poll_period_us": 10000, 00:33:44.563 " 10:14:58 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 100588 ']' 00:33:44.563 nvme_error_stat": false, 00:33:44.563 "nvme_ioq_poll_period_us": 0, 00:33:44.563 "rdma_cm_event_timeout_ms": 0, 00:33:44.563 "rdma_max_cq_size": 0, 00:33:44.563 "rdma_srq_size": 0, 00:33:44.563 "reconnect_delay_sec": 0, 00:33:44.563 "timeout_admin_us": 0, 00:33:44.563 "timeout_us": 0, 00:33:44.563 "transport_ack_timeout": 0, 00:33:44.563 "transport_retry_count": 4, 00:33:44.563 "transport_tos": 0 00:33:44.563 } 00:33:44.563 }, 00:33:44.563 { 00:33:44.563 "method": "bdev_nvme_attach_controller", 00:33:44.563 "params": { 00:33:44.563 "adrfam": "IPv4", 00:33:44.563 "ctrlr_loss_timeout_sec": 0, 00:33:44.563 "ddgst": false, 00:33:44.563 "fast_io_fail_timeout_sec": 0, 00:33:44.563 "hdgst": false, 00:33:44.563 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:44.563 
"name": "nvme0", 00:33:44.563 "prchk_guard": false, 00:33:44.563 "prchk_reftag": false, 00:33:44.563 "psk": "key0", 00:33:44.563 "reconnect_delay_sec": 0, 00:33:44.563 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:44.563 "traddr": "127.0.0.1", 00:33:44.563 "trsvcid": "4420", 00:33:44.563 "trtype": "TCP" 00:33:44.563 } 00:33:44.563 }, 00:33:44.563 { 00:33:44.563 "method": "bdev_nvme_set_hotplug", 00:33:44.563 "params": { 00:33:44.563 "enable": false, 00:33:44.563 "period_us": 100000 00:33:44.563 } 00:33:44.563 }, 00:33:44.563 { 00:33:44.563 "method": "bdev_wait_for_examine" 00:33:44.563 } 00:33:44.563 ] 00:33:44.563 }, 00:33:44.563 { 00:33:44.563 "subsystem": "nbd", 00:33:44.563 "config": [] 00:33:44.563 } 00:33:44.563 ] 00:33:44.563 }' 00:33:44.563 10:14:58 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:44.563 10:14:58 keyring_file -- keyring/file.sh@115 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:33:44.563 10:14:58 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:33:44.563 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:44.563 10:14:58 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:44.563 10:14:58 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:33:44.563 10:14:58 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:33:44.563 [2024-07-15 10:14:58.077178] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:33:44.563 [2024-07-15 10:14:58.077254] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100588 ] 00:33:44.822 [2024-07-15 10:14:58.214003] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:44.822 [2024-07-15 10:14:58.318733] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:33:45.080 [2024-07-15 10:14:58.480407] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:33:45.647 10:14:58 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:33:45.647 10:14:58 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:33:45.647 10:14:58 keyring_file -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:33:45.647 10:14:58 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:45.647 10:14:58 keyring_file -- keyring/file.sh@120 -- # jq length 00:33:45.647 10:14:59 keyring_file -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:33:45.647 10:14:59 keyring_file -- keyring/file.sh@121 -- # get_refcnt key0 00:33:45.647 10:14:59 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:33:45.647 10:14:59 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:45.647 10:14:59 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:45.647 10:14:59 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:45.647 10:14:59 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:45.905 10:14:59 keyring_file -- keyring/file.sh@121 -- # 
(( 2 == 2 )) 00:33:45.905 10:14:59 keyring_file -- keyring/file.sh@122 -- # get_refcnt key1 00:33:45.905 10:14:59 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:33:45.905 10:14:59 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:45.905 10:14:59 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:45.905 10:14:59 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:45.905 10:14:59 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:33:46.164 10:14:59 keyring_file -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:33:46.164 10:14:59 keyring_file -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:33:46.164 10:14:59 keyring_file -- keyring/file.sh@123 -- # jq -r '.[].name' 00:33:46.164 10:14:59 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:33:46.423 10:14:59 keyring_file -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:33:46.423 10:14:59 keyring_file -- keyring/file.sh@1 -- # cleanup 00:33:46.423 10:14:59 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.eikiKkNSdz /tmp/tmp.yN8FeKvBUi 00:33:46.423 10:14:59 keyring_file -- keyring/file.sh@20 -- # killprocess 100588 00:33:46.423 10:14:59 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 100588 ']' 00:33:46.423 10:14:59 keyring_file -- common/autotest_common.sh@952 -- # kill -0 100588 00:33:46.423 10:14:59 keyring_file -- common/autotest_common.sh@953 -- # uname 00:33:46.423 10:14:59 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:33:46.423 10:14:59 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 100588 00:33:46.423 10:14:59 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:33:46.423 10:14:59 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:33:46.423 killing process with pid 100588 00:33:46.423 10:14:59 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 100588' 00:33:46.423 10:14:59 keyring_file -- common/autotest_common.sh@967 -- # kill 100588 00:33:46.423 Received shutdown signal, test time was about 1.000000 seconds 00:33:46.423 00:33:46.423 Latency(us) 00:33:46.423 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:46.423 =================================================================================================================== 00:33:46.423 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:33:46.423 10:14:59 keyring_file -- common/autotest_common.sh@972 -- # wait 100588 00:33:46.423 10:14:59 keyring_file -- keyring/file.sh@21 -- # killprocess 100104 00:33:46.423 10:14:59 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 100104 ']' 00:33:46.423 10:14:59 keyring_file -- common/autotest_common.sh@952 -- # kill -0 100104 00:33:46.423 10:14:59 keyring_file -- common/autotest_common.sh@953 -- # uname 00:33:46.423 10:14:59 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:33:46.423 10:14:59 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 100104 00:33:46.681 10:15:00 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:33:46.681 10:15:00 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:33:46.681 killing process with pid 100104 00:33:46.681 10:15:00 keyring_file -- 
common/autotest_common.sh@966 -- # echo 'killing process with pid 100104' 00:33:46.681 10:15:00 keyring_file -- common/autotest_common.sh@967 -- # kill 100104 00:33:46.681 [2024-07-15 10:15:00.013833] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:33:46.681 10:15:00 keyring_file -- common/autotest_common.sh@972 -- # wait 100104 00:33:46.940 00:33:46.940 real 0m13.468s 00:33:46.940 user 0m32.643s 00:33:46.940 sys 0m3.068s 00:33:46.940 10:15:00 keyring_file -- common/autotest_common.sh@1124 -- # xtrace_disable 00:33:46.940 10:15:00 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:33:46.940 ************************************ 00:33:46.940 END TEST keyring_file 00:33:46.940 ************************************ 00:33:46.940 10:15:00 -- common/autotest_common.sh@1142 -- # return 0 00:33:46.940 10:15:00 -- spdk/autotest.sh@296 -- # [[ y == y ]] 00:33:46.940 10:15:00 -- spdk/autotest.sh@297 -- # run_test keyring_linux /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 00:33:46.940 10:15:00 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:33:46.940 10:15:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:46.940 10:15:00 -- common/autotest_common.sh@10 -- # set +x 00:33:46.940 ************************************ 00:33:46.940 START TEST keyring_linux 00:33:46.940 ************************************ 00:33:46.940 10:15:00 keyring_linux -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 00:33:46.940 * Looking for test storage... 00:33:46.940 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:33:46.940 10:15:00 keyring_linux -- keyring/linux.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:33:46.941 10:15:00 keyring_linux -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:33:46.941 10:15:00 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:33:47.200 10:15:00 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:47.200 10:15:00 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:47.200 10:15:00 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:47.200 10:15:00 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:47.200 10:15:00 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:47.200 10:15:00 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:47.200 10:15:00 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:47.200 10:15:00 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:47.200 10:15:00 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:47.200 10:15:00 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:47.200 10:15:00 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a2b6b25a-cc90-4aea-9f09-c06f8a634aec 00:33:47.200 10:15:00 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=a2b6b25a-cc90-4aea-9f09-c06f8a634aec 00:33:47.200 10:15:00 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:47.200 10:15:00 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:47.200 10:15:00 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:33:47.200 10:15:00 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:47.200 10:15:00 keyring_linux -- nvmf/common.sh@45 -- # source 
/home/vagrant/spdk_repo/spdk/scripts/common.sh 00:33:47.200 10:15:00 keyring_linux -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:47.200 10:15:00 keyring_linux -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:47.200 10:15:00 keyring_linux -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:47.200 10:15:00 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:47.200 10:15:00 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:47.200 10:15:00 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:47.200 10:15:00 keyring_linux -- paths/export.sh@5 -- # export PATH 00:33:47.200 10:15:00 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:47.200 10:15:00 keyring_linux -- nvmf/common.sh@47 -- # : 0 00:33:47.200 10:15:00 keyring_linux -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:33:47.200 10:15:00 keyring_linux -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:33:47.200 10:15:00 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:47.200 10:15:00 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:47.200 10:15:00 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:47.200 10:15:00 keyring_linux -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:33:47.200 10:15:00 keyring_linux -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:33:47.200 10:15:00 keyring_linux -- nvmf/common.sh@51 -- # have_pci_nics=0 00:33:47.200 10:15:00 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:33:47.200 10:15:00 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:33:47.200 10:15:00 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:33:47.200 10:15:00 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:33:47.200 10:15:00 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:33:47.200 10:15:00 keyring_linux -- 
keyring/linux.sh@45 -- # trap cleanup EXIT 00:33:47.200 10:15:00 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:33:47.200 10:15:00 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:33:47.200 10:15:00 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:33:47.200 10:15:00 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:33:47.200 10:15:00 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:33:47.200 10:15:00 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:33:47.200 10:15:00 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:33:47.200 10:15:00 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:33:47.200 10:15:00 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:33:47.200 10:15:00 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:33:47.200 10:15:00 keyring_linux -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:33:47.200 10:15:00 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:33:47.200 10:15:00 keyring_linux -- nvmf/common.sh@705 -- # python - 00:33:47.200 10:15:00 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:33:47.200 /tmp/:spdk-test:key0 00:33:47.200 10:15:00 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:33:47.200 10:15:00 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:33:47.200 10:15:00 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:33:47.200 10:15:00 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:33:47.200 10:15:00 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:33:47.200 10:15:00 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:33:47.200 10:15:00 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:33:47.200 10:15:00 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:33:47.200 10:15:00 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:33:47.200 10:15:00 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:33:47.200 10:15:00 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:33:47.201 10:15:00 keyring_linux -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:33:47.201 10:15:00 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:33:47.201 10:15:00 keyring_linux -- nvmf/common.sh@705 -- # python - 00:33:47.201 10:15:00 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:33:47.201 /tmp/:spdk-test:key1 00:33:47.201 10:15:00 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:33:47.201 10:15:00 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=100737 00:33:47.201 10:15:00 keyring_linux -- keyring/linux.sh@50 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:33:47.201 10:15:00 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 100737 00:33:47.201 10:15:00 keyring_linux -- common/autotest_common.sh@829 -- # '[' -z 100737 ']' 00:33:47.201 10:15:00 keyring_linux -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:47.201 10:15:00 keyring_linux -- common/autotest_common.sh@834 -- # local max_retries=100 00:33:47.201 Waiting for process to start 
up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:47.201 10:15:00 keyring_linux -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:47.201 10:15:00 keyring_linux -- common/autotest_common.sh@838 -- # xtrace_disable 00:33:47.201 10:15:00 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:33:47.201 [2024-07-15 10:15:00.722919] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:33:47.201 [2024-07-15 10:15:00.722993] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100737 ] 00:33:47.459 [2024-07-15 10:15:00.859815] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:47.459 [2024-07-15 10:15:00.964147] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:48.026 10:15:01 keyring_linux -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:33:48.026 10:15:01 keyring_linux -- common/autotest_common.sh@862 -- # return 0 00:33:48.026 10:15:01 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:33:48.026 10:15:01 keyring_linux -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:48.026 10:15:01 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:33:48.026 [2024-07-15 10:15:01.593709] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:48.284 null0 00:33:48.284 [2024-07-15 10:15:01.625591] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:33:48.284 [2024-07-15 10:15:01.625791] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:33:48.284 10:15:01 keyring_linux -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:48.284 10:15:01 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:33:48.284 832346779 00:33:48.284 10:15:01 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:33:48.284 48133732 00:33:48.284 10:15:01 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=100772 00:33:48.284 10:15:01 keyring_linux -- keyring/linux.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:33:48.284 10:15:01 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 100772 /var/tmp/bperf.sock 00:33:48.284 10:15:01 keyring_linux -- common/autotest_common.sh@829 -- # '[' -z 100772 ']' 00:33:48.284 10:15:01 keyring_linux -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:48.284 10:15:01 keyring_linux -- common/autotest_common.sh@834 -- # local max_retries=100 00:33:48.284 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:48.284 10:15:01 keyring_linux -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:48.284 10:15:01 keyring_linux -- common/autotest_common.sh@838 -- # xtrace_disable 00:33:48.284 10:15:01 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:33:48.284 [2024-07-15 10:15:01.707051] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
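For reference, the key-loading step the keyring_linux trace above just performed boils down to a few keyctl calls. A minimal sketch, with the interchange-format key strings copied verbatim from the trace; the serial numbers it prints (832346779, 48133732 in this run) are assigned by the kernel and will differ between runs:

# Load the two test PSKs into the session keyring (@s) under the names that
# bdevperf is later given via --psk :spdk-test:key0 / :spdk-test:key1.
keyctl add user :spdk-test:key0 'NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:' @s
keyctl add user :spdk-test:key1 'NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs:' @s
# Resolve a name back to its serial number and dump the payload, which is
# what the check_keys helper does further down in this trace.
sn=$(keyctl search @s user :spdk-test:key0)
keyctl print "$sn"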
00:33:48.284 [2024-07-15 10:15:01.707131] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100772 ] 00:33:48.284 [2024-07-15 10:15:01.845219] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:48.543 [2024-07-15 10:15:01.949464] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:33:49.134 10:15:02 keyring_linux -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:33:49.134 10:15:02 keyring_linux -- common/autotest_common.sh@862 -- # return 0 00:33:49.134 10:15:02 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:33:49.134 10:15:02 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:33:49.393 10:15:02 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:33:49.393 10:15:02 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:33:49.651 10:15:03 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:33:49.651 10:15:03 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:33:49.651 [2024-07-15 10:15:03.198361] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:33:49.914 nvme0n1 00:33:49.914 10:15:03 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:33:49.914 10:15:03 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:33:49.914 10:15:03 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:33:49.914 10:15:03 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:33:49.914 10:15:03 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:49.914 10:15:03 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:33:50.171 10:15:03 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:33:50.171 10:15:03 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:33:50.171 10:15:03 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:33:50.171 10:15:03 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:33:50.171 10:15:03 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:50.171 10:15:03 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:50.171 10:15:03 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:33:50.171 10:15:03 keyring_linux -- keyring/linux.sh@25 -- # sn=832346779 00:33:50.171 10:15:03 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:33:50.171 10:15:03 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:33:50.171 10:15:03 keyring_linux -- keyring/linux.sh@26 -- # [[ 832346779 == \8\3\2\3\4\6\7\7\9 ]] 00:33:50.171 10:15:03 keyring_linux -- 
keyring/linux.sh@27 -- # keyctl print 832346779 00:33:50.171 10:15:03 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:33:50.172 10:15:03 keyring_linux -- keyring/linux.sh@79 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:50.430 Running I/O for 1 seconds... 00:33:51.368 00:33:51.368 Latency(us) 00:33:51.368 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:51.368 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:33:51.368 nvme0n1 : 1.01 20356.65 79.52 0.00 0.00 6263.29 4807.88 10703.26 00:33:51.368 =================================================================================================================== 00:33:51.368 Total : 20356.65 79.52 0.00 0.00 6263.29 4807.88 10703.26 00:33:51.368 0 00:33:51.368 10:15:04 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:33:51.368 10:15:04 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:33:51.638 10:15:05 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:33:51.638 10:15:05 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:33:51.638 10:15:05 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:33:51.638 10:15:05 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:33:51.638 10:15:05 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:33:51.638 10:15:05 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:51.907 10:15:05 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:33:51.907 10:15:05 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:33:51.907 10:15:05 keyring_linux -- keyring/linux.sh@23 -- # return 00:33:51.907 10:15:05 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:33:51.907 10:15:05 keyring_linux -- common/autotest_common.sh@648 -- # local es=0 00:33:51.907 10:15:05 keyring_linux -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:33:51.907 10:15:05 keyring_linux -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:33:51.907 10:15:05 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:51.907 10:15:05 keyring_linux -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:33:51.907 10:15:05 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:51.907 10:15:05 keyring_linux -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:33:51.907 10:15:05 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk 
:spdk-test:key1 00:33:51.907 [2024-07-15 10:15:05.440634] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:33:51.907 [2024-07-15 10:15:05.441292] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20d6ea0 (107): Transport endpoint is not connected 00:33:51.908 [2024-07-15 10:15:05.442279] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20d6ea0 (9): Bad file descriptor 00:33:51.908 [2024-07-15 10:15:05.443275] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:33:51.908 [2024-07-15 10:15:05.443292] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:33:51.908 [2024-07-15 10:15:05.443301] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:33:51.908 2024/07/15 10:15:05 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host0 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk::spdk-test:key1 subnqn:nqn.2016-06.io.spdk:cnode0 traddr:127.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:33:51.908 request: 00:33:51.908 { 00:33:51.908 "method": "bdev_nvme_attach_controller", 00:33:51.908 "params": { 00:33:51.908 "name": "nvme0", 00:33:51.908 "trtype": "tcp", 00:33:51.908 "traddr": "127.0.0.1", 00:33:51.908 "adrfam": "ipv4", 00:33:51.908 "trsvcid": "4420", 00:33:51.908 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:51.908 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:51.908 "prchk_reftag": false, 00:33:51.908 "prchk_guard": false, 00:33:51.908 "hdgst": false, 00:33:51.908 "ddgst": false, 00:33:51.908 "psk": ":spdk-test:key1" 00:33:51.908 } 00:33:51.908 } 00:33:51.908 Got JSON-RPC error response 00:33:51.908 GoRPCClient: error on JSON-RPC call 00:33:51.908 10:15:05 keyring_linux -- common/autotest_common.sh@651 -- # es=1 00:33:51.908 10:15:05 keyring_linux -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:33:51.908 10:15:05 keyring_linux -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:33:51.908 10:15:05 keyring_linux -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:33:51.908 10:15:05 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:33:51.908 10:15:05 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:33:51.908 10:15:05 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:33:51.908 10:15:05 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:33:51.908 10:15:05 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:33:51.908 10:15:05 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:33:51.908 10:15:05 keyring_linux -- keyring/linux.sh@33 -- # sn=832346779 00:33:51.908 10:15:05 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 832346779 00:33:51.908 1 links removed 00:33:51.908 10:15:05 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:33:51.908 10:15:05 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:33:51.908 10:15:05 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:33:51.908 10:15:05 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:33:51.908 10:15:05 keyring_linux -- keyring/linux.sh@16 -- # keyctl 
search @s user :spdk-test:key1 00:33:51.908 10:15:05 keyring_linux -- keyring/linux.sh@33 -- # sn=48133732 00:33:51.908 10:15:05 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 48133732 00:33:52.168 1 links removed 00:33:52.168 10:15:05 keyring_linux -- keyring/linux.sh@41 -- # killprocess 100772 00:33:52.168 10:15:05 keyring_linux -- common/autotest_common.sh@948 -- # '[' -z 100772 ']' 00:33:52.168 10:15:05 keyring_linux -- common/autotest_common.sh@952 -- # kill -0 100772 00:33:52.168 10:15:05 keyring_linux -- common/autotest_common.sh@953 -- # uname 00:33:52.168 10:15:05 keyring_linux -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:33:52.168 10:15:05 keyring_linux -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 100772 00:33:52.168 10:15:05 keyring_linux -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:33:52.168 10:15:05 keyring_linux -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:33:52.168 killing process with pid 100772 00:33:52.168 10:15:05 keyring_linux -- common/autotest_common.sh@966 -- # echo 'killing process with pid 100772' 00:33:52.168 10:15:05 keyring_linux -- common/autotest_common.sh@967 -- # kill 100772 00:33:52.168 Received shutdown signal, test time was about 1.000000 seconds 00:33:52.168 00:33:52.168 Latency(us) 00:33:52.168 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:52.168 =================================================================================================================== 00:33:52.168 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:52.168 10:15:05 keyring_linux -- common/autotest_common.sh@972 -- # wait 100772 00:33:52.168 10:15:05 keyring_linux -- keyring/linux.sh@42 -- # killprocess 100737 00:33:52.168 10:15:05 keyring_linux -- common/autotest_common.sh@948 -- # '[' -z 100737 ']' 00:33:52.168 10:15:05 keyring_linux -- common/autotest_common.sh@952 -- # kill -0 100737 00:33:52.168 10:15:05 keyring_linux -- common/autotest_common.sh@953 -- # uname 00:33:52.168 10:15:05 keyring_linux -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:33:52.168 10:15:05 keyring_linux -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 100737 00:33:52.168 10:15:05 keyring_linux -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:33:52.168 10:15:05 keyring_linux -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:33:52.168 killing process with pid 100737 00:33:52.168 10:15:05 keyring_linux -- common/autotest_common.sh@966 -- # echo 'killing process with pid 100737' 00:33:52.168 10:15:05 keyring_linux -- common/autotest_common.sh@967 -- # kill 100737 00:33:52.168 10:15:05 keyring_linux -- common/autotest_common.sh@972 -- # wait 100737 00:33:52.736 00:33:52.736 real 0m5.672s 00:33:52.736 user 0m10.493s 00:33:52.736 sys 0m1.569s 00:33:52.736 10:15:06 keyring_linux -- common/autotest_common.sh@1124 -- # xtrace_disable 00:33:52.736 10:15:06 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:33:52.736 ************************************ 00:33:52.736 END TEST keyring_linux 00:33:52.736 ************************************ 00:33:52.736 10:15:06 -- common/autotest_common.sh@1142 -- # return 0 00:33:52.736 10:15:06 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 00:33:52.736 10:15:06 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:33:52.736 10:15:06 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:33:52.736 10:15:06 -- spdk/autotest.sh@321 -- # '[' 0 -eq 1 ']' 00:33:52.736 10:15:06 -- spdk/autotest.sh@330 -- # '[' 0 -eq 1 ']' 
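To make the keyring_linux flow above easier to follow end to end: every bperf_cmd in the trace is an rpc.py call against bdevperf's UNIX socket. A condensed sketch of the positive path, restating only commands that already appear in the log (paths, NQNs, and key names unchanged; this is a summary of the trace, not additional test logic):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/bperf.sock
$rpc -s $sock keyring_linux_set_options --enable   # allow key lookups in the kernel keyring
$rpc -s $sock framework_start_init                 # bdevperf was launched with --wait-for-rpc
$rpc -s $sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0
$rpc -s $sock keyring_get_keys                     # expected to list exactly one key, :spdk-test:key0
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s $sock perform_tests
$rpc -s $sock bdev_nvme_detach_controller nvme0
# The later attach with --psk :spdk-test:key1 is the negative case: it is wrapped
# in NOT, and the JSON-RPC error recorded above is the expected outcome.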
00:33:52.736 10:15:06 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 00:33:52.736 10:15:06 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:33:52.736 10:15:06 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:33:52.736 10:15:06 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']' 00:33:52.736 10:15:06 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']' 00:33:52.736 10:15:06 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:33:52.736 10:15:06 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]] 00:33:52.736 10:15:06 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:33:52.736 10:15:06 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:33:52.736 10:15:06 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:33:52.736 10:15:06 -- spdk/autotest.sh@380 -- # trap - SIGINT SIGTERM EXIT 00:33:52.736 10:15:06 -- spdk/autotest.sh@382 -- # timing_enter post_cleanup 00:33:52.736 10:15:06 -- common/autotest_common.sh@722 -- # xtrace_disable 00:33:52.736 10:15:06 -- common/autotest_common.sh@10 -- # set +x 00:33:52.736 10:15:06 -- spdk/autotest.sh@383 -- # autotest_cleanup 00:33:52.736 10:15:06 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:33:52.736 10:15:06 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:33:52.736 10:15:06 -- common/autotest_common.sh@10 -- # set +x 00:33:54.641 INFO: APP EXITING 00:33:54.641 INFO: killing all VMs 00:33:54.641 INFO: killing vhost app 00:33:54.641 INFO: EXIT DONE 00:33:55.576 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:33:55.576 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:33:55.576 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:33:56.514 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:33:56.514 Cleaning 00:33:56.514 Removing: /var/run/dpdk/spdk0/config 00:33:56.514 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:33:56.514 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:33:56.514 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:33:56.514 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:33:56.514 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:33:56.514 Removing: /var/run/dpdk/spdk0/hugepage_info 00:33:56.514 Removing: /var/run/dpdk/spdk1/config 00:33:56.514 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:33:56.514 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:33:56.514 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:33:56.514 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:33:56.514 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:33:56.514 Removing: /var/run/dpdk/spdk1/hugepage_info 00:33:56.514 Removing: /var/run/dpdk/spdk2/config 00:33:56.514 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:33:56.514 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:33:56.514 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:33:56.514 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:33:56.514 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:33:56.514 Removing: /var/run/dpdk/spdk2/hugepage_info 00:33:56.514 Removing: /var/run/dpdk/spdk3/config 00:33:56.514 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:33:56.514 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:33:56.514 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:33:56.514 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:33:56.514 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:33:56.514 Removing: /var/run/dpdk/spdk3/hugepage_info 00:33:56.514 
Removing: /var/run/dpdk/spdk4/config 00:33:56.514 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:33:56.514 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:33:56.514 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:33:56.514 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:33:56.514 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:33:56.514 Removing: /var/run/dpdk/spdk4/hugepage_info 00:33:56.514 Removing: /dev/shm/nvmf_trace.0 00:33:56.514 Removing: /dev/shm/spdk_tgt_trace.pid60694 00:33:56.514 Removing: /var/run/dpdk/spdk0 00:33:56.514 Removing: /var/run/dpdk/spdk1 00:33:56.514 Removing: /var/run/dpdk/spdk2 00:33:56.514 Removing: /var/run/dpdk/spdk3 00:33:56.514 Removing: /var/run/dpdk/spdk4 00:33:56.773 Removing: /var/run/dpdk/spdk_pid100104 00:33:56.773 Removing: /var/run/dpdk/spdk_pid100138 00:33:56.773 Removing: /var/run/dpdk/spdk_pid100588 00:33:56.773 Removing: /var/run/dpdk/spdk_pid100737 00:33:56.773 Removing: /var/run/dpdk/spdk_pid100772 00:33:56.773 Removing: /var/run/dpdk/spdk_pid60555 00:33:56.773 Removing: /var/run/dpdk/spdk_pid60694 00:33:56.773 Removing: /var/run/dpdk/spdk_pid60956 00:33:56.773 Removing: /var/run/dpdk/spdk_pid61049 00:33:56.773 Removing: /var/run/dpdk/spdk_pid61088 00:33:56.773 Removing: /var/run/dpdk/spdk_pid61198 00:33:56.773 Removing: /var/run/dpdk/spdk_pid61223 00:33:56.773 Removing: /var/run/dpdk/spdk_pid61346 00:33:56.773 Removing: /var/run/dpdk/spdk_pid61616 00:33:56.773 Removing: /var/run/dpdk/spdk_pid61781 00:33:56.773 Removing: /var/run/dpdk/spdk_pid61863 00:33:56.773 Removing: /var/run/dpdk/spdk_pid61950 00:33:56.773 Removing: /var/run/dpdk/spdk_pid62039 00:33:56.773 Removing: /var/run/dpdk/spdk_pid62072 00:33:56.773 Removing: /var/run/dpdk/spdk_pid62113 00:33:56.773 Removing: /var/run/dpdk/spdk_pid62169 00:33:56.773 Removing: /var/run/dpdk/spdk_pid62303 00:33:56.773 Removing: /var/run/dpdk/spdk_pid62919 00:33:56.773 Removing: /var/run/dpdk/spdk_pid62977 00:33:56.773 Removing: /var/run/dpdk/spdk_pid63041 00:33:56.773 Removing: /var/run/dpdk/spdk_pid63069 00:33:56.773 Removing: /var/run/dpdk/spdk_pid63138 00:33:56.773 Removing: /var/run/dpdk/spdk_pid63165 00:33:56.773 Removing: /var/run/dpdk/spdk_pid63244 00:33:56.773 Removing: /var/run/dpdk/spdk_pid63272 00:33:56.773 Removing: /var/run/dpdk/spdk_pid63318 00:33:56.773 Removing: /var/run/dpdk/spdk_pid63348 00:33:56.773 Removing: /var/run/dpdk/spdk_pid63394 00:33:56.773 Removing: /var/run/dpdk/spdk_pid63424 00:33:56.773 Removing: /var/run/dpdk/spdk_pid63565 00:33:56.773 Removing: /var/run/dpdk/spdk_pid63606 00:33:56.773 Removing: /var/run/dpdk/spdk_pid63675 00:33:56.773 Removing: /var/run/dpdk/spdk_pid63749 00:33:56.773 Removing: /var/run/dpdk/spdk_pid63769 00:33:56.773 Removing: /var/run/dpdk/spdk_pid63833 00:33:56.774 Removing: /var/run/dpdk/spdk_pid63862 00:33:56.774 Removing: /var/run/dpdk/spdk_pid63898 00:33:56.774 Removing: /var/run/dpdk/spdk_pid63931 00:33:56.774 Removing: /var/run/dpdk/spdk_pid63966 00:33:56.774 Removing: /var/run/dpdk/spdk_pid64000 00:33:56.774 Removing: /var/run/dpdk/spdk_pid64035 00:33:56.774 Removing: /var/run/dpdk/spdk_pid64069 00:33:56.774 Removing: /var/run/dpdk/spdk_pid64104 00:33:56.774 Removing: /var/run/dpdk/spdk_pid64138 00:33:56.774 Removing: /var/run/dpdk/spdk_pid64173 00:33:56.774 Removing: /var/run/dpdk/spdk_pid64205 00:33:56.774 Removing: /var/run/dpdk/spdk_pid64244 00:33:56.774 Removing: /var/run/dpdk/spdk_pid64273 00:33:56.774 Removing: /var/run/dpdk/spdk_pid64313 00:33:56.774 Removing: /var/run/dpdk/spdk_pid64342 
00:33:56.774 Removing: /var/run/dpdk/spdk_pid64382 00:33:56.774 Removing: /var/run/dpdk/spdk_pid64415 00:33:56.774 Removing: /var/run/dpdk/spdk_pid64458 00:33:56.774 Removing: /var/run/dpdk/spdk_pid64489 00:33:56.774 Removing: /var/run/dpdk/spdk_pid64530 00:33:56.774 Removing: /var/run/dpdk/spdk_pid64596 00:33:56.774 Removing: /var/run/dpdk/spdk_pid64709 00:33:56.774 Removing: /var/run/dpdk/spdk_pid65108 00:33:56.774 Removing: /var/run/dpdk/spdk_pid68461 00:33:56.774 Removing: /var/run/dpdk/spdk_pid68806 00:33:57.032 Removing: /var/run/dpdk/spdk_pid71276 00:33:57.032 Removing: /var/run/dpdk/spdk_pid71648 00:33:57.032 Removing: /var/run/dpdk/spdk_pid71884 00:33:57.032 Removing: /var/run/dpdk/spdk_pid71930 00:33:57.032 Removing: /var/run/dpdk/spdk_pid72535 00:33:57.032 Removing: /var/run/dpdk/spdk_pid72972 00:33:57.032 Removing: /var/run/dpdk/spdk_pid73022 00:33:57.032 Removing: /var/run/dpdk/spdk_pid73379 00:33:57.032 Removing: /var/run/dpdk/spdk_pid73907 00:33:57.032 Removing: /var/run/dpdk/spdk_pid74346 00:33:57.032 Removing: /var/run/dpdk/spdk_pid75309 00:33:57.032 Removing: /var/run/dpdk/spdk_pid76282 00:33:57.032 Removing: /var/run/dpdk/spdk_pid76399 00:33:57.032 Removing: /var/run/dpdk/spdk_pid76468 00:33:57.032 Removing: /var/run/dpdk/spdk_pid77921 00:33:57.032 Removing: /var/run/dpdk/spdk_pid78148 00:33:57.032 Removing: /var/run/dpdk/spdk_pid83206 00:33:57.032 Removing: /var/run/dpdk/spdk_pid83627 00:33:57.032 Removing: /var/run/dpdk/spdk_pid83735 00:33:57.032 Removing: /var/run/dpdk/spdk_pid83887 00:33:57.032 Removing: /var/run/dpdk/spdk_pid83932 00:33:57.032 Removing: /var/run/dpdk/spdk_pid83972 00:33:57.032 Removing: /var/run/dpdk/spdk_pid84013 00:33:57.032 Removing: /var/run/dpdk/spdk_pid84166 00:33:57.032 Removing: /var/run/dpdk/spdk_pid84318 00:33:57.032 Removing: /var/run/dpdk/spdk_pid84572 00:33:57.032 Removing: /var/run/dpdk/spdk_pid84689 00:33:57.032 Removing: /var/run/dpdk/spdk_pid84938 00:33:57.032 Removing: /var/run/dpdk/spdk_pid85059 00:33:57.032 Removing: /var/run/dpdk/spdk_pid85188 00:33:57.032 Removing: /var/run/dpdk/spdk_pid85526 00:33:57.032 Removing: /var/run/dpdk/spdk_pid85944 00:33:57.032 Removing: /var/run/dpdk/spdk_pid86251 00:33:57.032 Removing: /var/run/dpdk/spdk_pid86745 00:33:57.033 Removing: /var/run/dpdk/spdk_pid86752 00:33:57.033 Removing: /var/run/dpdk/spdk_pid87085 00:33:57.033 Removing: /var/run/dpdk/spdk_pid87108 00:33:57.033 Removing: /var/run/dpdk/spdk_pid87125 00:33:57.033 Removing: /var/run/dpdk/spdk_pid87156 00:33:57.033 Removing: /var/run/dpdk/spdk_pid87161 00:33:57.033 Removing: /var/run/dpdk/spdk_pid87523 00:33:57.033 Removing: /var/run/dpdk/spdk_pid87568 00:33:57.033 Removing: /var/run/dpdk/spdk_pid87908 00:33:57.033 Removing: /var/run/dpdk/spdk_pid88153 00:33:57.033 Removing: /var/run/dpdk/spdk_pid88639 00:33:57.033 Removing: /var/run/dpdk/spdk_pid89226 00:33:57.033 Removing: /var/run/dpdk/spdk_pid90528 00:33:57.033 Removing: /var/run/dpdk/spdk_pid91125 00:33:57.033 Removing: /var/run/dpdk/spdk_pid91133 00:33:57.033 Removing: /var/run/dpdk/spdk_pid93044 00:33:57.033 Removing: /var/run/dpdk/spdk_pid93132 00:33:57.033 Removing: /var/run/dpdk/spdk_pid93223 00:33:57.033 Removing: /var/run/dpdk/spdk_pid93308 00:33:57.033 Removing: /var/run/dpdk/spdk_pid93465 00:33:57.033 Removing: /var/run/dpdk/spdk_pid93550 00:33:57.033 Removing: /var/run/dpdk/spdk_pid93640 00:33:57.033 Removing: /var/run/dpdk/spdk_pid93725 00:33:57.033 Removing: /var/run/dpdk/spdk_pid94068 00:33:57.033 Removing: /var/run/dpdk/spdk_pid94762 00:33:57.033 Removing: 
/var/run/dpdk/spdk_pid96111 00:33:57.033 Removing: /var/run/dpdk/spdk_pid96314 00:33:57.033 Removing: /var/run/dpdk/spdk_pid96603 00:33:57.033 Removing: /var/run/dpdk/spdk_pid96902 00:33:57.033 Removing: /var/run/dpdk/spdk_pid97461 00:33:57.033 Removing: /var/run/dpdk/spdk_pid97473 00:33:57.294 Removing: /var/run/dpdk/spdk_pid97828 00:33:57.294 Removing: /var/run/dpdk/spdk_pid97988 00:33:57.294 Removing: /var/run/dpdk/spdk_pid98150 00:33:57.294 Removing: /var/run/dpdk/spdk_pid98247 00:33:57.294 Removing: /var/run/dpdk/spdk_pid98407 00:33:57.294 Removing: /var/run/dpdk/spdk_pid98516 00:33:57.294 Removing: /var/run/dpdk/spdk_pid99196 00:33:57.294 Removing: /var/run/dpdk/spdk_pid99231 00:33:57.294 Removing: /var/run/dpdk/spdk_pid99266 00:33:57.294 Removing: /var/run/dpdk/spdk_pid99541 00:33:57.294 Removing: /var/run/dpdk/spdk_pid99571 00:33:57.294 Removing: /var/run/dpdk/spdk_pid99610 00:33:57.294 Clean 00:33:57.294 10:15:10 -- common/autotest_common.sh@1451 -- # return 0 00:33:57.294 10:15:10 -- spdk/autotest.sh@384 -- # timing_exit post_cleanup 00:33:57.294 10:15:10 -- common/autotest_common.sh@728 -- # xtrace_disable 00:33:57.294 10:15:10 -- common/autotest_common.sh@10 -- # set +x 00:33:57.294 10:15:10 -- spdk/autotest.sh@386 -- # timing_exit autotest 00:33:57.294 10:15:10 -- common/autotest_common.sh@728 -- # xtrace_disable 00:33:57.294 10:15:10 -- common/autotest_common.sh@10 -- # set +x 00:33:57.294 10:15:10 -- spdk/autotest.sh@387 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:33:57.294 10:15:10 -- spdk/autotest.sh@389 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:33:57.294 10:15:10 -- spdk/autotest.sh@389 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:33:57.294 10:15:10 -- spdk/autotest.sh@391 -- # hash lcov 00:33:57.294 10:15:10 -- spdk/autotest.sh@391 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:33:57.294 10:15:10 -- spdk/autotest.sh@393 -- # hostname 00:33:57.294 10:15:10 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /home/vagrant/spdk_repo/spdk -t fedora38-cloud-1716830599-074-updated-1705279005 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:33:57.553 geninfo: WARNING: invalid characters removed from testname! 
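The coverage capture just above and the merge/filter steps that follow are easier to read with the long repeated --rc option sets stripped. A hedged summary of the lcov sequence (output paths shortened here; the actual commands in the log write under /home/vagrant/spdk_repo/spdk/../output and carry the full branch/function-coverage flags):

lcov -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t "$(hostname)" -o cov_test.info
lcov -q -a cov_base.info -a cov_test.info -o cov_total.info   # merge the pre-test baseline with the test counters
lcov -q -r cov_total.info '*/dpdk/*' -o cov_total.info        # drop the bundled DPDK sources
lcov -q -r cov_total.info '/usr/*' -o cov_total.info          # drop system headers
lcov -q -r cov_total.info '*/examples/vmd/*' '*/app/spdk_lspci/*' '*/app/spdk_top/*' -o cov_total.info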
00:34:24.108 10:15:33 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:34:24.108 10:15:36 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:34:25.045 10:15:38 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:34:27.598 10:15:40 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:34:29.502 10:15:42 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:34:31.405 10:15:44 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:34:33.941 10:15:47 -- spdk/autotest.sh@400 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:34:33.941 10:15:47 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:34:33.941 10:15:47 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:34:33.941 10:15:47 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:33.941 10:15:47 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:33.941 10:15:47 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:33.941 10:15:47 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:33.941 10:15:47 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:33.941 10:15:47 -- paths/export.sh@5 -- $ export PATH 00:34:33.941 10:15:47 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:33.941 10:15:47 -- common/autobuild_common.sh@443 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:34:33.941 10:15:47 -- common/autobuild_common.sh@444 -- $ date +%s 00:34:33.941 10:15:47 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1721038547.XXXXXX 00:34:33.941 10:15:47 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1721038547.ycbDGQ 00:34:33.941 10:15:47 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]] 00:34:33.941 10:15:47 -- common/autobuild_common.sh@450 -- $ '[' -n '' ']' 00:34:33.941 10:15:47 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:34:33.941 10:15:47 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:34:33.941 10:15:47 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:34:33.941 10:15:47 -- common/autobuild_common.sh@460 -- $ get_config_params 00:34:33.941 10:15:47 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:34:33.941 10:15:47 -- common/autotest_common.sh@10 -- $ set +x 00:34:33.941 10:15:47 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-avahi --with-golang' 00:34:33.941 10:15:47 -- common/autobuild_common.sh@462 -- $ start_monitor_resources 00:34:33.941 10:15:47 -- pm/common@17 -- $ local monitor 00:34:33.941 10:15:47 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:34:33.941 10:15:47 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:34:33.941 10:15:47 -- pm/common@25 -- $ sleep 1 00:34:33.941 10:15:47 -- pm/common@21 -- $ date +%s 00:34:33.941 10:15:47 -- pm/common@21 -- $ date +%s 00:34:33.941 10:15:47 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1721038547 00:34:33.941 10:15:47 -- pm/common@21 -- $ 
/home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1721038547 00:34:33.941 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1721038547_collect-vmstat.pm.log 00:34:33.941 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1721038547_collect-cpu-load.pm.log 00:34:34.891 10:15:48 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT 00:34:34.891 10:15:48 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j10 00:34:34.891 10:15:48 -- spdk/autopackage.sh@11 -- $ cd /home/vagrant/spdk_repo/spdk 00:34:34.891 10:15:48 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:34:34.891 10:15:48 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:34:34.891 10:15:48 -- spdk/autopackage.sh@19 -- $ timing_finish 00:34:34.891 10:15:48 -- common/autotest_common.sh@734 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:34:34.891 10:15:48 -- common/autotest_common.sh@735 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:34:34.891 10:15:48 -- common/autotest_common.sh@737 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:34:34.891 10:15:48 -- spdk/autopackage.sh@20 -- $ exit 0 00:34:34.891 10:15:48 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:34:34.891 10:15:48 -- pm/common@29 -- $ signal_monitor_resources TERM 00:34:34.891 10:15:48 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:34:34.891 10:15:48 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:34:34.891 10:15:48 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:34:34.891 10:15:48 -- pm/common@44 -- $ pid=102512 00:34:34.891 10:15:48 -- pm/common@50 -- $ kill -TERM 102512 00:34:34.891 10:15:48 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:34:34.891 10:15:48 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:34:34.891 10:15:48 -- pm/common@44 -- $ pid=102514 00:34:34.891 10:15:48 -- pm/common@50 -- $ kill -TERM 102514 00:34:34.891 + [[ -n 5322 ]] 00:34:34.891 + sudo kill 5322 00:34:34.902 [Pipeline] } 00:34:34.923 [Pipeline] // timeout 00:34:34.928 [Pipeline] } 00:34:34.946 [Pipeline] // stage 00:34:34.952 [Pipeline] } 00:34:34.968 [Pipeline] // catchError 00:34:34.978 [Pipeline] stage 00:34:34.980 [Pipeline] { (Stop VM) 00:34:34.993 [Pipeline] sh 00:34:35.275 + vagrant halt 00:34:37.812 ==> default: Halting domain... 00:34:45.979 [Pipeline] sh 00:34:46.261 + vagrant destroy -f 00:34:48.793 ==> default: Removing domain... 
00:34:49.061 [Pipeline] sh 00:34:49.346 + mv output /var/jenkins/workspace/nvmf-tcp-vg-autotest/output 00:34:49.357 [Pipeline] } 00:34:49.379 [Pipeline] // stage 00:34:49.384 [Pipeline] } 00:34:49.403 [Pipeline] // dir 00:34:49.409 [Pipeline] } 00:34:49.427 [Pipeline] // wrap 00:34:49.434 [Pipeline] } 00:34:49.451 [Pipeline] // catchError 00:34:49.459 [Pipeline] stage 00:34:49.460 [Pipeline] { (Epilogue) 00:34:49.472 [Pipeline] sh 00:34:49.760 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:34:55.109 [Pipeline] catchError 00:34:55.111 [Pipeline] { 00:34:55.126 [Pipeline] sh 00:34:55.411 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:34:55.411 Artifacts sizes are good 00:34:55.421 [Pipeline] } 00:34:55.439 [Pipeline] // catchError 00:34:55.450 [Pipeline] archiveArtifacts 00:34:55.458 Archiving artifacts 00:34:55.610 [Pipeline] cleanWs 00:34:55.622 [WS-CLEANUP] Deleting project workspace... 00:34:55.622 [WS-CLEANUP] Deferred wipeout is used... 00:34:55.629 [WS-CLEANUP] done 00:34:55.631 [Pipeline] } 00:34:55.649 [Pipeline] // stage 00:34:55.654 [Pipeline] } 00:34:55.670 [Pipeline] // node 00:34:55.675 [Pipeline] End of Pipeline 00:34:55.711 Finished: SUCCESS